docs/guides/general/: migrate from wiki & md files, clean up

docs/guides/general/remote-access.rst: clean up documentation
classabbyamp 2022-12-06 13:36:44 -05:00 committed by Zach Dykstra
parent 221014f0a3
commit 31fc472621
9 changed files with 752 additions and 255 deletions


@@ -1,152 +0,0 @@
# Boot Environments and You: A Primer
ZFSBootMenu adapts to a wide range of system configurations by making as few
assumptions about filesystem layout as possible. When looking for Linux kernels
to boot, the *only* requirements are that
1. At least one ZFS pool be importable,
2. At least one ZFS filesystem on any importable pool have *either* the
properties
- `mountpoint=/` and **not** `org.zfsbootmenu:active=off`, or
- `mountpoint=legacy` and `org.zfsbootmenu:active=on`
For filesystems with `mountpoint=/`, the property `org.zfsbootmenu:active`
provides a means to opt **out** of scanning this filesystem for kernels. For
filesystems with `mountpoint=legacy`, `org.zfsbootmenu:active` provides a
means to opt **in** to scanning this filesystem for kernels. Filesystems
that do not satisfy these conditions are *never* touched by ZFSBootMenu.
3. At least one of the scanned ZFS filesystems contains a `/boot` subdirectory
that contains at least one paired kernel and initramfs.
ZFSBootMenu will present a list of all ZFS filesystems that satisfy these
constraints. Additionally, if any filesystem provides more than one paired
kernel and initramfs, it is possible to choose which kernel will be loaded
should that filesystem be selected for booting. (It is, of course, possible to
automate the selection of a filesystem and kernel so that ZFSBootMenu can boot
a system without user intervention.)
## Finding Kernels
Although it may be possible to compile a kernel with built-in ZFS support that
would be capable of booting from a ZFS root without an initramfs, this is not
standard practice and would require considerable expertise. Consequently,
ZFSBootMenu requires that a kernel be matched with an initramfs image before it
will attempt to boot the kernel. ZFSBootMenu tries hard to identify matched
pairs of kernels and initramfs images as installed by a wide range of Linux
distributions. As noted above, kernel and initramfs pairs are required to
reside in a `/boot` subdirectory of a ZFS filesystem scanned by ZFSBootMenu. The
kernel must begin with one of the following prefixes:
- vmlinuz
- vmlinux
- linux
- linuz
- kernel
After the prefix, the name of a kernel may be optionally followed by a hyphen
(`-`) and an arbitrary string, which ZFSBootMenu considers a *version*
identifier.
ZFSBootMenu attempts to match one of several possible initramfs names for each
kernel it identifies. Broadly, an initramfs is paired with a kernel when its
name matches one of four forms:
- `initramfs-${label}${extension}`
- `initramfs${extension}-${label}`
- `initrd-${label}${extension}`
- `initrd${extension}-${label}`
The value of `${extension}` may be empty or the text `.img` and may
additionally be followed by one of several common compression suffixes: `gz`,
`bz2`, `xz`, `lzma`, `lz4`, `lzo`, or `zstd`. The value of `${label}` is either
- The full name of the kernel file with path components removed, *e.g.*,
`vmlinuz-5.15.9_1` or `linux-lts`; or
- The version part of a kernel file (if the kernel contains a version part):
- For `vmlinuz-5.15.9_1`, this is `5.15.9_1`;
- For `linux-lts`, this is `lts`.
ZFSBootMenu prefers the more specific label (the full kernel name) when it
exists.
## Boot Environments
Internally, ZFSBootMenu does not understand the concept of a boot environment.
When it finds a suitable kernel and initramfs pair, it will load them and
invoke `kexec` to jump into the chosen kernel. In fact, ZFSBootMenu doesn't
even require that a "root" filesystem be the real root that a kernel and
initramfs will mount. It would be possible, for example, to mount a ZFS
filesystem at `/kernels` and install kernels and matching initramfs images to
the `/kernels/boot` subdirectory. As long as the `/kernels` filesystem has a
`mountpoint` property (along with `org.zfsbootmenu:active` if needed),
ZFSBootMenu will identify the kernels even if the filesystem at `/kernels`
contains nothing besides the `boot` subdirectory.
Although ZFSBootMenu ensures maximum flexibility by imposing minimal
assumptions on filesystem layout, not all layouts are equally sensible. For
straightforward maintenance and administration, it is recommended that each
Linux operating system that you wish to boot be stored as a self-contained boot
environment. Conceptually, the ZFSBootMenu team recommends that a *boot
environment* consist of a **single** ZFS filesystem that contains all of the
*coupled* system state for that environment. Coupled system state embodies the
executables, configuration and other files that are critical to proper system
operation and must generally be kept consistent at all times. In most systems,
coupled system state tends to be maintained by a package manager. The package
manager might install programs in `/usr/bin`, configuration in `/etc` and other
files throughout the filesystem. The package manager itself probably maintains
a database of installed packages somewhere in `/var`.
ZFSBootMenu is certainly capable of booting an environment that mounts separate
filesystems at `/` and other paths like `/etc`, `/usr` or `/var`. ZFSBootMenu
never needs to understand these details because either the initramfs or root
filesystem will assume responsibility for mounting all filesystems it needs.
However, a key benefit of boot environments is *atomicity*. In general, it is
bad to allow the contents of `/usr` to become inconsistent with the package
manager database on `/var`. Configuration files in `/etc` can often be tied to
specific versions of software, so they should be kept consistent as well. When
these directories live on different filesystems, ensuring consistency becomes
much more challenging.
For example, suppose that a software update has gone wrong and a program has
been overwritten by a corrupt or buggy version. With ZFS snapshots, `zfs
rollback` is sufficient to restore functionality. However, when `/usr` and
`/var` reside on different filesystems, both must be rolled back to the same
point in time. When the filesystems are on different snapshot schedules (or
there is some delay between snapshotting one after the other), deciding which
snapshots represent consistent state may not be a trivial task.
To some extent, this could be remedied with a recursive snapshot scheme that
provides uniform nomenclature for consistent snapshots across multiple
filesystems. However, ZFSBootMenu strives to provide simple management and
recovery interfaces for all boot environments on a disk, and
1. Providing a convenient interface for rollback of a boot environment becomes
substantially harder if ZFSBootMenu has to identify snapshots across
multiple filesystems that might compose an environment,
2. Even identifying which filesystems should be considered part of an
environment is not always a trivial task, and
3. The problem gets significantly more complex when a system holds multiple
boot environments that might each have multiple sub-mounts.
Keeping the entire operating system contents on a single filesystem avoids
these issues entirely. For the purposes of rolling back snapshots or cloning
one boot environment to another, ZFSBootMenu expects that the environment
consists of exactly one filesystem, so that a snapshot of the filesystem always
presents a consistent view of system state, and rollbacks or clones behave as
expected without the need for manual correlation. If you wish to maintain more
complicated setups, you can always manually manage snapshot rollbacks or clone
operations from the recovery shell that ZFSBootMenu provides.
Note that "coupled system state" does not include "user data" that should
generally survive things like snapshot rollbacks. Recovering from a bad system
update is generally not expected to discard user email or recent database
transactions. For this reason, directories like `/home`, `/var/mail` and others
that hold important data *not* managed by the system **should** reside on
separate filesystems.


@@ -1,102 +0,0 @@
# Customized ZFSBootMenu Images from Build Containers - A Quick Start Guide
## Introduction
Official ZFSBootMenu release assets are built within OCI containers based on the [zbm-builder image](https://github.com/zbm-dev/zfsbootmenu/pkgs/container/zbm-builder). The image is built atop Void Linux and provides a predictable environment without the need to install ZFSBootMenu or its dependencies on the host system.
The `zbm-builder.sh` script provides a front-end for integrating custom ZFSBootMenu configurations into the build container without the complexity of directly controlling the container runtime.
Users wishing to build custom ZFSBootMenu images should be familiar with the core concepts of ZFSBootMenu as outlined in the [project README](../README.md). For those interested, the [container README](../releng/docker/README.md) provides more details on the operation of the ZFSBootMenu build container. However, `zbm-builder.sh` seeks to abstract away many of the details discussed in that document.
## Dependencies
To build ZFSBootMenu images from a build container, one of [`podman`](https://podman.io) or [`docker`](https://www.docker.com) is required. The development team prefers `podman`, but `docker` may generally be substituted without consequence.
If a custom build container is desired, [`buildah`](https://buildah.io) and `podman` are generally required. A [`Dockerfile`](../releng/docker/Dockerfile) is provided for convenience, but feature parity with the `buildah` script is not guaranteed. The [container README](../releng/docker/README.md) provides more information about the process of creating a custom build image.
### Podman
Install `podman` and `buildah` (if desired) using the package manager in your distribution:
- On Void, `xbps-install podman buildah`
- On Arch or its derivatives, `pacman -S podman buildah`
- On Debian or its derivatives, `apt-get install podman buildah`
It is possible to configure `podman` for rootless container deployment. Consult the [tutorial](https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md) for details.
### Docker
Install `docker` using the package manager in your distribution:
- On Void, `xbps-install docker`
- On Arch or its derivatives, `pacman -S docker`
- On Debian or its derivatives, `apt-get install docker`
Non-root users that need to work with Docker images and containers should belong to the `docker` group. For example,
```sh
usermod -a -G docker zbmuser
```
will add `zbmuser` to the `docker` group on systems that provide the `usermod` program.
### Build Script
The `zbm-builder.sh` script requires nothing more than functional installations of `bash` and one of `podman` or `docker`. Simply download a copy of the script to a convenient directory.
> Advanced users may wish to build images from a local copy of the ZFSBootMenu source tree. To make this possible, either fetch and unpack a source tarball or clone the git repository locally.
## Building a ZFSBootMenu Image
To build a default image, invoke `zbm-builder.sh` with no arguments. For example, from the directory that contains the script, run
```sh
./zbm-builder.sh
```
to produce a default kernel/initramfs pair in the `./build` subdirectory.
The default behavior of `zbm-builder.sh` will:
1. Pull the default builder image, `ghcr.io/zbm-dev/zbm-builder:latest`.
2. If `./hostid` does not exist, copy `/etc/hostid` (if it exists) to `./hostid`.
3. Spawn an ephemeral container from the builder image and run its build process:
1. Bind-mount the working directory into the container to expose local configurations to the builder
2. If `./config.yaml` exists, inform the builder to use that custom configuration instead of the default
3. Run the internal build script to produce output in the `./build` subdirectory
### Custom ZFSBootMenu Hooks
ZFSBootMenu supports [custom hooks](pod/zfsbootmenu.7.pod#options-for-dracut) in three stages:
1. `early_setup` hooks run after the `zfs` kernel driver has been loaded, but before ZFSBootMenu attempts to import any pools.
2. `setup` hooks run after pools are imported, right before ZFSBootMenu will either boot a default environment or present a menu.
3. `teardown` hooks run immediately before ZFSBootMenu will `kexec` the kernel for the selected environment.
When `zbm-builder.sh` runs, it will identify custom hooks as executable files in the respective subdirectories of its build directory:
1. `hooks.early_setup.d`
2. `hooks.setup.d`
3. `hooks.teardown.d`
For each hook directory that contains at least one executable file, `zbm-builder.sh` will write custom configuration snippets for `dracut` and `mkinitcpio` that will include these files in the output images.
> The `mkinitcpio` configuration prepared by `zbm-builder.sh` consists of snippets installed in a `mkinitcpio.d` subdirectory of the build directory. The [default `mkinitcpio` configuration](../etc/zbm-builder/mkinitcpio.conf) includes a loop to source these snippets.
### Fully Customizing Images
The entrypoint for the ZFSBootMenu build container implements a [tiered configuration approach](../releng/docker/README.md#zfsbootmenu-configuration-and-execution) that allows default configurations to be augmented or replaced with local configurations in the build directory. A custom `config.yaml` may be provided in the working directory to override the default ZFSBootMenu configuration; configuration snippets for `dracut` or `mkinitcpio` can be placed in the `dracut.conf.d` and `mkinitcpio.conf.d` subdirectories, respectively. For `mkinitcpio` configurations, a complete `mkinitcpio.conf` can be placed in the working directory to override the standard configuration.
> The standard `mkinitcpio.conf` in the ZBM build container contains customizations to source snippets in the `mkinitcpio.conf.d` subdirectory. This is not standard behavior for `mkinitcpio`. If the primary `mkinitcpio.conf` is overridden, this logic may need to be replicated. It is generally better to rely on the default configuration and override portions in `mkinitcpio.conf.d`.
The build container runs its build script from the working directory on the host. In general, relative paths in custom configuration files are acceptable and refer to locations relative to the build directory. If absolute paths are preferred or required for some configurations, note that the build directory will be mounted as `/build` in the container.
> The internal build script **always** overrides the output paths for ZFSBootMenu components and UEFI executables to ensure that the images will reside in a specified output directory (or, by default, a `build` subdirectory of the build directory) upon completion. Relative paths are primarily useful for specifying local `dracut` or `mkinitcpio` configuration paths.
More advanced users may wish to alter the build process itself. Some control over the build process is exposed through command-line options that are described in the output of
```sh
zbm-builder.sh -h
```
Before adjusting these command-line options, seek a thorough understanding of the [image build process](../releng/docker/README.md) and the command sequence of `zbm-builder.sh` itself.


@@ -15,7 +15,8 @@ release = '2.0.0'
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
extensions = [
'sphinx_toolbox.collapse',
'sphinx.ext.extlinks',
'sphinx_tabs.tabs',
'sphinx_rtd_theme',
'recommonmark',
]
@@ -23,6 +24,16 @@ extensions = [
templates_path = ['_templates']
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', '*env', '**/_include']
today_fmt = '%Y-%m-%d'
highlight_language = 'sh'
smartquotes = False
manpages_url = 'https://man.voidlinux.org/{page}.{section}'
# https://www.sphinx-doc.org/en/master/usage/extensions/extlinks.html
extlinks = {
'zbm': (f'https://github.com/zbm-dev/zfsbootmenu/blob/v{release}/%s', '%s'),
}
# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output

docs/guides/general.rst Normal file

@@ -0,0 +1,11 @@
General
=======
.. toctree::
   :titlesonly:

   general/bootenvs-and-you
   general/container-building
   general/direct-uefi-booting
   general/remote-access
   general/portable


@@ -0,0 +1,122 @@
Boot Environments and You: A Primer
===================================
ZFSBootMenu adapts to a wide range of system configurations by making as few assumptions about filesystem layout as
possible. When looking for Linux kernels to boot, the *only* requirements are that:
1. At least one ZFS pool be importable,
2. At least one ZFS filesystem on any importable pool have *either* the properties

   * ``mountpoint=/`` and **not** ``org.zfsbootmenu:active=off``, or
   * ``mountpoint=legacy`` and ``org.zfsbootmenu:active=on``

   For filesystems with ``mountpoint=/``, the property ``org.zfsbootmenu:active`` provides a means to opt **out** of
   scanning this filesystem for kernels. For filesystems with ``mountpoint=legacy``, ``org.zfsbootmenu:active`` provides
   a means to opt **in** to scanning this filesystem for kernels. Filesystems that do not satisfy these conditions are
   *never* touched by ZFSBootMenu.

3. At least one of the scanned ZFS filesystems contains a ``/boot`` subdirectory that contains at least one paired
   kernel and initramfs.
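The dataset rule in item 2 can be paraphrased as a small shell predicate (an illustration only, not code taken from ZFSBootMenu itself):

```sh
# Illustrative paraphrase of the scan rule. $1 is the dataset's mountpoint
# property; $2 is org.zfsbootmenu:active ("-" when unset, as zfs-get prints it).
should_scan() {
    case "$1" in
        /)      [ "$2" != "off" ] ;;   # opt-out model: scanned unless active=off
        legacy) [ "$2" = "on" ] ;;     # opt-in model: scanned only when active=on
        *)      return 1 ;;            # any other mountpoint is never touched
    esac
}
```

Here ``should_scan / -`` succeeds because the property is unset and the opt-out default applies, while ``should_scan legacy -`` fails because legacy-mountpoint datasets must opt in explicitly.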
ZFSBootMenu will present a list of all ZFS filesystems that satisfy these constraints. Additionally, if any filesystem
provides more than one paired kernel and initramfs, it is possible to choose which kernel will be loaded should that
filesystem be selected for booting. (It is, of course, possible to automate the selection of a filesystem and kernel so
that ZFSBootMenu can boot a system without user intervention.)
Finding Kernels
---------------
Although it may be possible to compile a kernel with built-in ZFS support that would be capable of booting from a ZFS
root without an initramfs, this is not standard practice and would require considerable expertise. Consequently,
ZFSBootMenu requires that a kernel be matched with an initramfs image before it will attempt to boot the kernel.
ZFSBootMenu tries hard to identify matched pairs of kernels and initramfs images as installed by a wide range of Linux
distributions. As noted above, kernel and initramfs pairs are required to reside in a ``/boot`` subdirectory of a ZFS
filesystem scanned by ZFSBootMenu. The kernel must begin with one of the following prefixes:
* vmlinuz
* vmlinux
* linux
* linuz
* kernel
After the prefix, the name of a kernel may be optionally followed by a hyphen (``-``) and an arbitrary string, which
ZFSBootMenu considers a *version* identifier.
ZFSBootMenu attempts to match one of several possible initramfs names for each kernel it identifies. Broadly, an
initramfs is paired with a kernel when its name matches one of four forms:
* ``initramfs-${label}${extension}``
* ``initramfs${extension}-${label}``
* ``initrd-${label}${extension}``
* ``initrd${extension}-${label}``
The value of ``${extension}`` may be empty or the text ``.img`` and may additionally be followed by one of several
common compression suffixes: ``gz``, ``bz2``, ``xz``, ``lzma``, ``lz4``, ``lzo``, or ``zstd``. The value of
``${label}`` is either:
* The full name of the kernel file with path components removed, *e.g.*, ``vmlinuz-5.15.9_1`` or ``linux-lts``; or
* The version part of a kernel file (if the kernel contains a version part):

  * For ``vmlinuz-5.15.9_1``, this is ``5.15.9_1``;
  * For ``linux-lts``, this is ``lts``.

ZFSBootMenu prefers the more specific label (the full kernel name) when it exists.
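The naming rules above can be sketched for a single kernel file as follows (an illustration only; ZFSBootMenu's actual matching also handles the compression suffixes listed above):

```sh
# Generate candidate initramfs names for one kernel file (illustration only).
kernel="/boot/vmlinuz-5.15.9_1"
base="${kernel##*/}"     # full name, path components removed: vmlinuz-5.15.9_1
version="${base#*-}"     # version part after the first hyphen: 5.15.9_1

for label in "$base" "$version"; do
    for ext in "" ".img"; do
        printf '%s\n' \
            "initramfs-${label}${ext}" \
            "initramfs${ext}-${label}" \
            "initrd-${label}${ext}" \
            "initrd${ext}-${label}"
    done
done
```

When candidates exist for both labels, the full kernel name wins.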
Boot Environments
-----------------
Internally, ZFSBootMenu does not understand the concept of a boot environment. When it finds a suitable kernel and
initramfs pair, it will load them and invoke ``kexec`` to jump into the chosen kernel. In fact, ZFSBootMenu doesn't even
require that a "root" filesystem be the real root that a kernel and initramfs will mount. It would be possible, for
example, to mount a ZFS filesystem at ``/kernels`` and install kernels and matching initramfs images to the
``/kernels/boot`` subdirectory. As long as the ``/kernels`` filesystem has a ``mountpoint`` property (along with
``org.zfsbootmenu:active`` if needed), ZFSBootMenu will identify the kernels even if the filesystem at ``/kernels``
contains nothing besides the ``boot`` subdirectory.
Although ZFSBootMenu ensures maximum flexibility by imposing minimal assumptions on filesystem layout, not all layouts
are equally sensible. For straightforward maintenance and administration, it is recommended that each Linux operating
system that you wish to boot be stored as a self-contained boot environment. Conceptually, the ZFSBootMenu team
recommends that a *boot environment* consist of a **single** ZFS filesystem that contains all of the *coupled* system
state for that environment. Coupled system state embodies the executables, configuration and other files that are
critical to proper system operation and must generally be kept consistent at all times. In most systems, coupled system
state tends to be maintained by a package manager. The package manager might install programs in ``/usr/bin``,
configuration in ``/etc`` and other files throughout the filesystem. The package manager itself probably maintains a
database of installed packages somewhere in ``/var``.
ZFSBootMenu is certainly capable of booting an environment that mounts separate filesystems at ``/`` and other paths
like ``/etc``, ``/usr`` or ``/var``. ZFSBootMenu never needs to understand these details because either the initramfs or
root filesystem will assume responsibility for mounting all filesystems it needs. However, a key benefit of boot
environments is *atomicity*. In general, it is bad to allow the contents of ``/usr`` to become inconsistent with the
package manager database on ``/var``. Configuration files in ``/etc`` can often be tied to specific versions of
software, so they should be kept consistent as well. When these directories live on different filesystems, ensuring
consistency becomes much more challenging.
For example, suppose that a software update has gone wrong and a program has been overwritten by a corrupt or buggy
version. With ZFS snapshots, ``zfs rollback`` is sufficient to restore functionality. However, when ``/usr`` and
``/var`` reside on different filesystems, both must be rolled back to the same point in time. When the filesystems are
on different snapshot schedules (or there is some delay between snapshotting one after the other), deciding which
snapshots represent consistent state may not be a trivial task.
To some extent, this could be remedied with a recursive snapshot scheme that provides uniform nomenclature for
consistent snapshots across multiple filesystems. However, ZFSBootMenu strives to provide simple management and recovery
interfaces for all boot environments on a disk, and
1. Providing a convenient interface for rollback of a boot environment becomes substantially harder if ZFSBootMenu has
to identify snapshots across multiple filesystems that might compose an environment,
2. Even identifying which filesystems should be considered part of an environment is not always a trivial task, and
3. The problem gets significantly more complex when a system holds multiple boot environments that might each have
multiple sub-mounts.
Keeping the entire operating system contents on a single filesystem avoids these issues entirely. For the purposes of
rolling back snapshots or cloning one boot environment to another, ZFSBootMenu expects that the environment consists of
exactly one filesystem, so that a snapshot of the filesystem always presents a consistent view of system state, and
rollbacks or clones behave as expected without the need for manual correlation. If you wish to maintain more complicated
setups, you can always manually manage snapshot rollbacks or clone operations from the recovery shell that ZFSBootMenu
provides.
Note that "coupled system state" does not include "user data" that should generally survive things like snapshot
rollbacks. Recovering from a bad system update is generally not expected to discard user email or recent database
transactions. For this reason, directories like ``/home``, ``/var/mail`` and others that hold important data *not*
managed by the system **should** reside on separate filesystems.
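As a sketch of such a layout (the pool and dataset names are hypothetical, and these commands would normally be run during installation with the pool imported under an alternate root):

```sh
# One dataset per boot environment; user data lives outside it.
zfs create -o mountpoint=none      zroot/ROOT
zfs create -o mountpoint=/         zroot/ROOT/void   # self-contained boot environment
zfs create -o mountpoint=/home     zroot/home        # survives BE rollbacks and clones
zfs create -o mountpoint=/var/mail zroot/mail        # likewise
```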


@@ -0,0 +1,145 @@
Custom ZFSBootMenu Images from Build Containers
===============================================
.. contents:: Contents
   :depth: 2
   :local:
   :backlinks: none
Introduction
------------
Official ZFSBootMenu release assets are built within OCI containers based on the
`zbm-builder image <https://github.com/zbm-dev/zfsbootmenu/pkgs/container/zbm-builder>`_. The image is built atop Void
Linux and provides a predictable environment without the need to install ZFSBootMenu or its dependencies on the host
system.
The ``zbm-builder.sh`` script provides a front-end for integrating custom ZFSBootMenu configurations into the build
container without the complexity of directly controlling the container runtime.
Users wishing to build custom ZFSBootMenu images should be familiar with the core concepts of ZFSBootMenu as outlined in
the :zbm:`project README <README.md>`. For those interested, the :zbm:`container README <releng/docker/README.md>`
provides more details on the operation of the ZFSBootMenu build container. However, ``zbm-builder.sh`` seeks to
abstract away many of the details discussed in that document.
Dependencies
------------
To build ZFSBootMenu images from a build container, either `podman <https://podman.io>`_ or
`docker <https://www.docker.com>`_ is required. The development team prefers ``podman``, but ``docker`` may generally be
substituted without consequence.
If a custom build container is desired, `buildah <https://buildah.io>`_ and ``podman`` are generally required. A
:zbm:`Dockerfile <releng/docker/Dockerfile>` is provided for convenience, but feature parity with the ``buildah``
script is not guaranteed. The :zbm:`container README <releng/docker/README.md>` provides more information about the
process of creating a custom build image.
Podman
~~~~~~
Install ``podman`` and ``buildah`` (if desired) using the package manager in your distribution:
* On Void, ``xbps-install podman buildah``
* On Arch or its derivatives, ``pacman -S podman buildah``
* On Debian or its derivatives, ``apt-get install podman buildah``
It is possible to configure ``podman`` for rootless container deployment. Consult the
`tutorial <https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md>`_ for details.
Docker
~~~~~~
Install ``docker`` using the package manager in your distribution:
* On Void, ``xbps-install docker``
* On Arch or its derivatives, ``pacman -S docker``
* On Debian or its derivatives, ``apt-get install docker``
Non-root users that need to work with Docker images and containers should belong to the ``docker`` group. For
example::

    usermod -a -G docker zbmuser

will add ``zbmuser`` to the ``docker`` group on systems that provide the ``usermod`` program.
Build Script
~~~~~~~~~~~~
The ``zbm-builder.sh`` script requires nothing more than functional installations of ``bash`` and one of ``podman`` or
``docker``. Simply download a copy of the script to a convenient directory.
Advanced users may wish to build images from a local copy of the ZFSBootMenu source tree. To make this possible, either
fetch and unpack a source tarball or clone the git repository locally.
Building a ZFSBootMenu Image
----------------------------
To build a default image, invoke ``zbm-builder.sh`` with no arguments. For example, from the directory that contains the
script, run ``./zbm-builder.sh`` to produce a default kernel/initramfs pair in the ``./build`` subdirectory.
The default behavior of ``zbm-builder.sh`` will:
1. Pull the default builder image, ``ghcr.io/zbm-dev/zbm-builder:latest``.
2. If ``./hostid`` does not exist, copy ``/etc/hostid`` (if it exists) to ``./hostid``.
3. Spawn an ephemeral container from the builder image and run its build process:

   1. Bind-mount the working directory into the container to expose local configurations to the builder
   2. If ``./config.yaml`` exists, inform the builder to use that custom configuration instead of the default
   3. Run the internal build script to produce output in the ``./build`` subdirectory
Custom ZFSBootMenu Hooks
~~~~~~~~~~~~~~~~~~~~~~~~
ZFSBootMenu supports :ref:`custom hooks <zbm-dracut-options>` in three stages:
1. ``early_setup`` hooks run after the ``zfs`` kernel driver has been loaded, but before ZFSBootMenu attempts to import
any pools.
2. ``setup`` hooks run after pools are imported, right before ZFSBootMenu will either boot a default environment or
present a menu.
3. ``teardown`` hooks run immediately before ZFSBootMenu will ``kexec`` the kernel for the selected environment.
When ``zbm-builder.sh`` runs, it will identify custom hooks as executable files in the respective subdirectories of its
build directory:
1. ``hooks.early_setup.d``
2. ``hooks.setup.d``
3. ``hooks.teardown.d``
For each hook directory that contains at least one executable file, ``zbm-builder.sh`` will write custom configuration
snippets for ``dracut`` and ``mkinitcpio`` that will include these files in the output images.
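For example, a hypothetical ``setup``-stage hook (the file name and contents are illustrative) can be added to the build directory like this:

```sh
# Create an executable setup-stage hook in the build directory.
mkdir -p hooks.setup.d
cat > hooks.setup.d/10-pool-status.sh <<'EOF'
#!/bin/sh
# Runs inside the ZFSBootMenu initramfs after pools are imported,
# just before the menu (or an automatic boot) is reached.
zpool status -x
EOF
chmod +x hooks.setup.d/10-pool-status.sh
```

On the next ``zbm-builder.sh`` run, the presence of an executable file in ``hooks.setup.d`` causes the matching configuration snippet to be generated and the hook to be included in the output images.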
The ``mkinitcpio`` configuration prepared by ``zbm-builder.sh`` consists of snippets installed in a ``mkinitcpio.d``
subdirectory of the build directory. The :zbm:`default mkinitcpio configuration <etc/zbm-builder/mkinitcpio.conf>`
includes a loop to source these snippets.
Fully Customizing Images
~~~~~~~~~~~~~~~~~~~~~~~~
The entrypoint for the ZFSBootMenu build container implements a
:zbm:`tiered configuration approach <releng/docker/README.md#zfsbootmenu-configuration-and-execution>`
that allows default configurations to be augmented or replaced with local configurations in the build directory. A
custom ``config.yaml`` may be provided in the working directory to override the default ZFSBootMenu configuration;
configuration snippets for ``dracut`` or ``mkinitcpio`` can be placed in the ``dracut.conf.d`` and ``mkinitcpio.conf.d``
subdirectories, respectively. For ``mkinitcpio`` configurations, a complete ``mkinitcpio.conf`` can be placed in the
working directory to override the standard configuration.
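As a hypothetical example, a snippet in the build directory's ``dracut.conf.d`` could embed an encryption keyfile in the image via dracut's standard ``install_items`` option (the file name and key path are illustrative):

```sh
# dracut.conf.d/keyfile.conf (file name assumed)
# Extra files to copy into the ZFSBootMenu initramfs; install_items is a
# standard dracut configuration option.
install_items+=" /etc/zfs/zroot.key "
```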
The standard ``mkinitcpio.conf`` in the ZBM build container contains customizations to source snippets in the
``mkinitcpio.conf.d`` subdirectory. This is not standard behavior for ``mkinitcpio``. If the primary
``mkinitcpio.conf`` is overridden, this logic may need to be replicated. It is generally better to rely on the default
configuration and override portions in ``mkinitcpio.conf.d``.
The build container runs its build script from the working directory on the host. In general, relative paths in custom
configuration files are acceptable and refer to locations relative to the build directory. If absolute paths
are preferred or required for some configurations, note that the build directory will be mounted as ``/build`` in the
container.
The internal build script **always** overrides the output paths for ZFSBootMenu components and UEFI executables to
ensure that the images will reside in a specified output directory (or, by default, a ``build`` subdirectory of the
build directory) upon completion. Relative paths are primarily useful for specifying local ``dracut`` or ``mkinitcpio``
configuration paths.
More advanced users may wish to alter the build process itself. Some control over the build process is exposed through
command-line options that are described in the output of ``zbm-builder.sh -h``.
Before adjusting these command-line options, seek a thorough understanding of the
:zbm:`image build process <releng/docker/README.md>` and the command sequence of ``zbm-builder.sh`` itself.

UEFI Booting without an intermediate boot manager
=================================================
On most UEFI systems, ZFSBootMenu can be booted without an intermediate boot manager like rEFInd. Linux kernels
typically include an EFI stub and can be invoked as UEFI executables directly by the firmware. Unfortunately, while
some UEFI implementations allow command-line arguments to be passed to such an executable, others (from Dell, for
example) seem to ignore all configured command-line arguments, making it impossible to specify needed options (such as
the path to the ZFSBootMenu initramfs). Even implementations that do respect configured arguments may provide no
firmware interface to alter them, which means booting a backup ZFSBootMenu image may not be possible if it wasn't
configured in advance from a Linux installation.
These limitations are easily avoided if ZFSBootMenu is packaged as a *bundled UEFI executable* that encapsulates the
Linux kernel, ZFSBootMenu initramfs and all needed command-line arguments. Dracut facilitates the creation of a bundled
UEFI executable, and the ``generate-zbm`` script exposes this capability.
Creation of a Bundled UEFI Executable
-------------------------------------
The ``EFI`` section of the ZFSBootMenu :doc:`config.yaml </man/generate-zbm.5>`
governs the creation of bundled UEFI executables. The default configuration disables this option; to enable it, set
``EFI.Enabled: true``:
.. code-block:: yaml
EFI:
Enabled: true
The remaining keys in the ``EFI`` section allow control over where and how UEFI bundles are created:
* ``ImageDir`` is the location where the bundle will be written, and should generally be a subdirectory of the ``EFI``
subdirectory of your EFI system partition. The default, ``/boot/efi/EFI/void``, is fine if the ESP is mounted at
``/boot/efi`` (and you are either running Void Linux or don't care if the directory name matches your distribution
name).
* ``Versions`` controls whether UEFI bundles include a version and revision number in their name and, if so, how many
  prior versioned executables are retained. Because the firmware is not automatically reconfigured to boot the latest
  version after runs of ``generate-zbm``, it is probably best to disable ``Versions`` by setting its value to ``false``
  or ``0``. See the :ref:`description of this key in the manual page <config-components>` for more details about its
  behavior. Even when versioning is disabled, ``generate-zbm`` still makes a backup of your existing boot image by
  replacing its ``.EFI`` extension with ``-backup.EFI`` to provide a fallback.
* ``Stub`` specifies the location of the UEFI stub loader required when creating a bundled executable. Both ``gummiboot``
and its descendant ``systemd-boot`` provide stub loaders; ``gummiboot``, for example, tends to store the loader at
``/usr/lib/gummiboot/linuxx64.efi.stub``. If this key is omitted (as it is by default), ``dracut`` will attempt to
find either the ``systemd-boot`` or ``gummiboot`` version at their expected locations. This key is useful when
automatic detection fails.
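Putting these keys together, a hedged example ``EFI`` section (paths and values are illustrative and should be adapted to your ESP layout) might look like:

.. code-block:: yaml

  EFI:
    Enabled: true
    # bundles land on the ESP, assumed mounted at /boot/efi
    ImageDir: /boot/efi/EFI/zbm
    # keep a single image plus the automatic -backup.EFI copy
    Versions: false
    # optional; set only if automatic stub detection fails
    Stub: /usr/lib/gummiboot/linuxx64.efi.stub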
In addition, two options in the ``Kernel`` section of the configuration file are used during bundle creation:
* ``Prefix`` provides the base name for the output bundle file. If this is omitted, the base name will be derived from
the name of the kernel used to create the image; for example, the kernel ``/boot/vmlinuz-<version>`` will produce a
bundle called ``vmlinuz.EFI`` in the configured ``ImageDir``, while the kernel ``/boot/vmlinuz-lts-<version>`` will
produce a bundle called ``vmlinuz-lts.EFI``.
* ``CommandLine`` provides the command-line arguments that will be encoded in the bundle and passed to the kernel during
boot. The ``dracut`` configuration option ``kernel_cmdline`` also provides a mechanism for encoding the kernel
command-line; if the ZFSBootMenu configuration specifies ``Kernel.CommandLine`` and the ``dracut`` configuration for
ZFSBootMenu specifies ``kernel_cmdline``, the two values will be concatenated.
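For example, under the hypothetical settings below, a kernel at ``/boot/vmlinuz-<version>`` would produce a bundle named ``vmlinuz.EFI`` in the configured ``ImageDir`` with the given arguments baked in:

.. code-block:: yaml

  Kernel:
    Prefix: vmlinuz
    CommandLine: ro quiet loglevel=0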
After adjusting the configuration options as desired, run ``generate-zbm`` and a bundled UEFI executable will be created
in ``EFI.ImageDir``.
Booting the Bundled Executable
------------------------------
The `efibootmgr <https://github.com/rhinstaller/efibootmgr>`_ utility provides a means to configure your firmware to
boot the bundled executable. For example::
efibootmgr -c -d /dev/sda -p 1 -L "ZFSBootMenu" -l \\EFI\\VOID\\VMLINUZ.EFI
will create a new entry that will boot the executable written to ``/boot/efi/EFI/void/vmlinuz.EFI`` if your EFI system
partition is ``/dev/sda1`` and is mounted at ``/boot/efi``. (Remember that the EFI system partition should be a FAT
volume, so the path separators are backslashes and paths should be case-insensitive.) For good measure, create an
alternative entry that points at the backup image::
efibootmgr -c -d /dev/sda -p 1 -L "ZFSBootMenu (Backup)" -l \\EFI\\VOID\\VMLINUZ-BACKUP.EFI
The firmware should provide some means to select between these alternatives.
It is also generally possible to configure the boot sequence from your firmware setup interface. Simply find and select
the path to the bundled EFI executable from this interface.

Portable ZFSBootMenu
====================
UEFI makes it easy to deploy ZFSBootMenu without a local installation. Most
UEFI systems will search for and run an EFI executable at the path
``/EFI/BOOT/BOOTX64.EFI`` on a FAT-formatted `EFI System Partition`_
located on any disk that the firmware is told to boot. This executable can be a
standard ZFSBootMenu release image or a custom, locally generated image. Almost
any modern system can be made to launch a ZFSBootMenu instance just by
inserting and booting from a minimally configured USB drive.
.. _EFI System Partition: https://en.wikipedia.org/wiki/EFI_system_partition
Procedure
---------
1. On a USB drive, create a GPT header.
2. Create an `EFI system partition`_ on the drive. The partition should be at
least 100 MB.
   * With `gdisk <https://man.voidlinux.org/gdisk.8>`_, this is accomplished by
     setting the partition type to ``EF00``.
   * With `parted <https://man.voidlinux.org/parted.8>`_, this is accomplished
     by setting the ``boot`` flag on the partition.
3. Format the partition as FAT.
4. Fetch a copy of the ZFSBootMenu release image:
.. code-block:: sh
curl -LJO https://get.zfsbootmenu.org/efi
5. Save the resulting download as ``EFI/BOOT/BOOTX64.EFI`` within the EFI
   system partition.
6. Tell your system to boot from the USB drive.
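On a hypothetical drive at ``/dev/sdX`` (substitute the actual device; these commands are destructive and require root), the whole procedure might look like::

  parted -s /dev/sdX mklabel gpt
  parted -s /dev/sdX mkpart ESP fat32 1MiB 512MiB
  parted -s /dev/sdX set 1 boot on
  mkfs.vfat /dev/sdX1
  mount /dev/sdX1 /mnt
  mkdir -p /mnt/EFI/BOOT
  curl -L -o /mnt/EFI/BOOT/BOOTX64.EFI https://get.zfsbootmenu.org/efi
  umount /mnt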

Remote Access to ZFSBootMenu
============================
.. contents:: Contents
:depth: 2
:local:
:backlinks: none
Having SSH access to ZFSBootMenu can be critical because it allows some measure of recovery over a remote connection. If
your boot environments reside in encrypted filesystems, SSH access is necessary if you ever intend to reboot a machine
when you are not physically present. Because ZFSBootMenu supports Dracut and mkinitcpio, any mechanism that can provide
remote access to a Dracut or mkinitcpio initramfs will work.
Dracut
------
The `dracut-crypt-ssh <https://github.com/dracut-crypt-ssh/dracut-crypt-ssh>`_ module provides a straightforward
approach to configuring and launching an SSH server in Dracut images. The module is packaged in Void and does not rely
on ``systemd`` within the initramfs. If you run a distribution that does not package ``dracut-crypt-ssh``, you will
need to track down its dependencies. The ``dracut-network`` module and ``dropbear`` are required to provide network
access and an SSH server, respectively; other prerequisites are probably already installed on your system.
Simplified Installation Instructions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``dracut-crypt-ssh`` package comes with a few helper utilities in the ``module/60crypt-ssh/helper`` directory that
are designed to simplify providing passwords and snooping console output so that you can interact with unlock processes
that are already running in the initramfs. These components are not required for ZFSBootMenu and provide little
additional value. If the package installs without problems, it is fine to leave the helpers in place. If your
distribution has trouble compiling the helpers, just copy the contents of the ``60crypt-ssh`` directory, less the
``helper`` directory and ``Makefile``, to the modules directory for Dracut. This will most likely be
``/usr/lib/dracut/modules.d/60crypt-ssh``.
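Assuming the module sources are cloned from the project repository, the manual installation described above might look like::

  git clone https://github.com/dracut-crypt-ssh/dracut-crypt-ssh.git /tmp/dracut-crypt-ssh
  cp -r /tmp/dracut-crypt-ssh/module/60crypt-ssh /usr/lib/dracut/modules.d/
  rm -r /usr/lib/dracut/modules.d/60crypt-ssh/helper
  rm /usr/lib/dracut/modules.d/60crypt-ssh/Makefile
  rm -r /tmp/dracut-crypt-ssh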
If you do not install the contents of ``helper``, you may wish to edit the ``module-setup.sh`` script provided by the
package to remove references to installing the helper. At the time of writing, these references consist of the last four
lines (five, if you count the harmless comment) of the ``install()`` function. Removing these lines should not be
critical, as Dracut should happily continue the initramfs creation process even if those installation commands fail.
If you use Dracut to produce the initramfs images in your boot environment, you may wish to disable the ``crypt-ssh``
module in those images. Just add
.. code-block::
omit_dracutmodules+=" crypt-ssh "
to a configuration file in ``/etc/dracut.conf.d``. The configuration file must have a ``.conf`` extension to be
recognized; see `dracut.conf(5) <https://man.voidlinux.org/dracut.conf.5>`_ for more information.
Configuring Dropbear in ZFSBootMenu
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By default, ``dracut-crypt-ssh`` will generate random host keys for your ZFSBootMenu initramfs. This is undesirable
because SSH will complain about unknown keys every time you reboot. If you wish, you can configure the module to copy
your regular host keys into the image. However, there are two problems with this:
1. The ZFSBootMenu image will generally be installed on a filesystem with no access permissions, allowing anybody to
read your private host keys; and
2. The ``dropbearconvert`` program may be incapable of converting modern OpenSSH host keys into the required dropbear
format.
To create dedicated host keys in the proper format, decide on a location, for example ``/etc/dropbear``, and create the
new keys::
mkdir -p /etc/dropbear
ssh-keygen -t rsa -m PEM -f /etc/dropbear/ssh_host_rsa_key
ssh-keygen -t ecdsa -m PEM -f /etc/dropbear/ssh_host_ecdsa_key
The module expects to install RSA and ECDSA keys, so both types are created here.
.. note::
When prompted for a passphrase when creating each host key, leave it blank. A non-empty password will prevent dropbear
from reading a key.
To inform ``dracut-network`` that it must bring up a network interface, pass the kernel command-line parameters
``ip=dhcp`` and ``rd.neednet=1`` to your ZFSBootMenu image. If you use another boot loader to start ZFSBootMenu, *e.g.*
rEFInd or syslinux, this can be accomplished by configuring that loader. However, it may be more convenient to add these
parameters directly to the ZFSBootMenu image::
mkdir -p /etc/cmdline.d
echo "ip=dhcp rd.neednet=1" > /etc/cmdline.d/dracut-network.conf
It is possible to specify a static IP configuration by replacing ``dhcp`` with a properly formatted configuration
string. Consult the `dracut documentation <https://man.voidlinux.org/dracut.cmdline.7#Network>`_ for details about
static IP configuration.
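Dracut static configurations take the form ``ip=<client-IP>:[<peer>]:<gateway-IP>:<netmask>:<hostname>:<interface>:<method>``. As an illustration only (the addresses and interface name are examples), ``/etc/cmdline.d/dracut-network.conf`` might instead contain::

  ip=192.168.1.2::192.168.1.1:255.255.255.0::eth0:none rd.neednet=1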
Other methods exist, besides writing to ``/etc/cmdline.d`` or configuring another boot loader, to specify kernel
command-line arguments that will configure networking in Dracut. However, Dracut uses the ``/etc/cmdline.d`` directory
to store "fake" arguments, which it processes directly rather than handing to the kernel. In testing, other methods
(like adding these arguments to the ``kernel_cmdline`` Dracut option for a UEFI bundle) can cause the ``ip=dhcp``
argument to appear more than once on the kernel command-line, which may cause ``dracut-network`` to fail
catastrophically and refuse to boot. Writing a configuration file in ``/etc/cmdline.d`` is a reliable way to ensure that
``ip=dhcp`` appears exactly once to ``dracut-network``.
With critical pieces in place, ZFSBootMenu can be configured to bundle ``dracut-crypt-ssh`` in its images. Create the
Dracut configuration file ``/etc/zfsbootmenu/dracut.conf.d/dropbear.conf`` with the following contents::
# Enable dropbear ssh server and pull in network configuration args
add_dracutmodules+=" crypt-ssh "
install_optional_items+=" /etc/cmdline.d/dracut-network.conf "
# Copy system keys for consistent access
dropbear_rsa_key=/etc/dropbear/ssh_host_rsa_key
dropbear_ecdsa_key=/etc/dropbear/ssh_host_ecdsa_key
# User zbmuser is the authorized unlocker here
dropbear_acl=/home/zbmuser/.ssh/authorized_keys
The last line is optional and assumes the user ``zbmuser`` should provide an ``authorized_keys`` file that will
determine remote access to the ZFSBootMenu image. The ``dracut-crypt-ssh`` module does not allow for password
authentication over SSH; instead, key-based authentication is forced. By default, the list of authorized keys is taken
from ``/root/.ssh/authorized_keys`` on the host. If you would prefer to use the ``authorized_keys`` file from another
user on your system, copy the above example and replace ``zbmuser`` with the name of the user whose ``authorized_keys``
you wish to include.
.. note::
The default configuration will start dropbear on TCP port 222. This can be overridden with the ``dropbear_port``
configuration option. Generally, you do not want the server listening on the default port 22. Clients that expect to
find your normal host keys when connecting to an SSH server on port 22 will refuse to connect when they find different
keys provided by dropbear.
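For example, adding a single line to ``/etc/zfsbootmenu/dracut.conf.d/dropbear.conf`` (the port number here is arbitrary) moves the server::

  # listen on TCP port 2222 instead of the module default of 222
  dropbear_port=2222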
Unless you've taken steps not described here, the network-enabled ZFSBootMenu image will not advertise itself via
dynamic DNS or mDNS. You will need to know the IP address of the ZFSBootMenu host to connect. Thus, you should either
configure a static IP address in ``/etc/cmdline.d/dracut-network.conf`` or configure your DHCP server to reserve a known
address for the MAC address of the network interface you configure for ``dracut-crypt-ssh``.
mkinitcpio
----------
ZFSBootMenu also supports the `mkinitcpio <https://gitlab.archlinux.org/archlinux/mkinitcpio/mkinitcpio/>`_ initramfs
generator used by Arch Linux.
ZFSBootMenu Configuration Changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Since `version 2.0.0 <https://github.com/zbm-dev/zfsbootmenu/releases/tag/v2.0.0>`_, ZFSBootMenu will install a standard
:zbm:`mkinitcpio.conf <etc/zfsbootmenu/mkinitcpio.conf>` in the ``/etc/zfsbootmenu`` configuration directory. This file
is generally the same as a standard ``mkinitcpio.conf``, except some additional declarations may be added to control
aspects of the ``zfsbootmenu`` mkinitcpio module. The configuration file includes extensive inline documentation in the
form of comments; configuration options specific to ZFSBootMenu are also described in the
:ref:`zfsbootmenu(7) <zbm-mkinitcpio-options>` manual page.
ZFSBootMenu still expects to use dracut by default. To override this behavior and instead use mkinitcpio, edit
``/etc/zfsbootmenu/config.yaml`` and add the following options:
.. code-block:: yaml
Global:
InitCPIO: true
## NOTE: The following three lines are OPTIONAL
InitCPIOHookDirs:
- /etc/zfsbootmenu/initcpio
- /usr/lib/initcpio
.. note::
In the examples below, a couple of mkinitcpio modules will be installed to ``/etc/zfsbootmenu/initcpio`` to keep them
isolated from system-installed modules. To accommodate this non-standard installation, ``InitCPIOHookDirs`` must be
defined in ``/etc/zfsbootmenu/config.yaml``. Furthermore, because overriding the hook directory causes mkinitcpio to
ignore its default module path, the default ``/usr/lib/initcpio`` must be manually specified. If all hooks are
installed in ``/usr/lib/initcpio`` or ``/etc/initcpio``, the ZFSBootMenu configuration does **not** need to specify
``InitCPIOHookDirs``.
Without further changes, running ``generate-zbm`` should now produce a ZBM image based on mkinitcpio rather than dracut,
although it will lack networking and remote-access capabilities. (By default, ``generate-zbm`` instructs mkinitcpio to
use the configuration at ``/etc/zfsbootmenu/mkinitcpio.conf``, although this can be changed in the ``generate-zbm``
configuration file.) For these features, some additional mkinitcpio modules and configuration changes are necessary.
Because further configuration will require additional mkinitcpio modules, and these must be run before the
``zfsbootmenu`` module in the initramfs, edit ``/etc/zfsbootmenu/mkinitcpio.conf`` and **remove** any ``zfsbootmenu``
entry in the ``HOOKS`` definition. As the standard configuration file notes, the ``zfsbootmenu`` module is required for
ZFSBootMenu to function, but ``generate-zbm`` will forcefully add it at the end of the module list. Thus, the simplest
way to ensure that additions to the ``HOOKS`` array occur *before* the ``zfsbootmenu`` module is to omit the latter from
the configuration. The standard ``HOOKS`` line in ``/etc/zfsbootmenu/mkinitcpio.conf`` should therefore be something
like::
HOOKS=(base udev autodetect modconf block filesystems keyboard)
Basic Network Access
~~~~~~~~~~~~~~~~~~~~
Network access in a mkinitcpio image can be realized in one of several ways. In Arch Linux, for example, the
`mkinitcpio-nfs-utils <https://archlinux.org/packages/?name=mkinitcpio-nfs-utils>`_ package provides a
`net module <https://wiki.archlinux.org/title/Mkinitcpio#Using_net>`_ that allows the initramfs to parse ``ip=``
directives from the kernel command line. When a static IP configuration is sufficient, the
`mkinitcpio-rclocal <https://github.com/ahesford/mkinitcpio-rclocal>`_ module allows user scripts to be injected at
several points in the initramfs boot process and provides a simple mechanism for configuring a network interface.
When installing mkinitcpio modules that are not provided by a system package manager, it may be preferable to keep them
isolated from the ordinary module tree. Because this module will only be required in ZBM images, placing extra modules
in ``/etc/zfsbootmenu/initcpio`` is convenient::
curl -L https://github.com/ahesford/mkinitcpio-rclocal/archive/master.tar.gz | tar -zxvf - -C /tmp
mkdir -p /etc/zfsbootmenu/initcpio/{install,hooks}
cp /tmp/mkinitcpio-rclocal-master/rclocal_hook /etc/zfsbootmenu/initcpio/hooks/rclocal
cp /tmp/mkinitcpio-rclocal-master/rclocal_install /etc/zfsbootmenu/initcpio/install/rclocal
rm -r /tmp/mkinitcpio-rclocal-master
Next, create an ``rc.local`` script that can be run within the mkinitcpio image to configure the ``eth0`` interface::
cat > /etc/zfsbootmenu/initcpio/rc.local <<RCEOF
#!/bin/sh
# Don't attempt to configure an interface that does not exist
ip link show dev eth0 >/dev/null 2>&1 || exit
# Bring up the interface
ip link set dev eth0 up
# Configure a static address for this host
ip addr add 192.168.1.2/24 brd + dev eth0
ip route add default via 192.168.1.1
# Add some name servers
cat > /etc/resolv.conf <<-EOF
nameserver 1.1.1.1
nameserver 8.8.8.8
EOF
RCEOF
.. note::
If your Ethernet interface is called something other than ``eth0`` or your static IP configuration is different,
adjust the script as needed.
To ensure that the ``rclocal`` module is installed and run in the ZBM image, either append ``rclocal`` to the array
defined on the ``HOOKS`` line in ``/etc/zfsbootmenu/mkinitcpio.conf`` or run
.. code-block::
sed -e '/HOOKS=/a HOOKS+=(rclocal)' -i /etc/zfsbootmenu/mkinitcpio.conf
The ``rclocal`` module must also be told where to find the ``rc.local`` script that it should install and run::
echo 'rclocal_hook=/etc/zfsbootmenu/initcpio/rc.local' >> /etc/zfsbootmenu/mkinitcpio.conf
Finally, make sure to include the ``ip`` executable in your initramfs image by manually adding ``ip`` to the
``BINARIES`` array in ``/etc/zfsbootmenu/mkinitcpio.conf`` or by running
.. code-block::
sed -e '/BINARIES=/a BINARIES+=(ip)' -i /etc/zfsbootmenu/mkinitcpio.conf
Dropbear
~~~~~~~~
Arch Linux provides a `mkinitcpio-dropbear <https://archlinux.org/packages/community/any/mkinitcpio-dropbear/>`_ package
that provides a straightforward method for installing, configuring and running the dropbear SSH server inside a
mkinitcpio image. This package is based on a
`project of the same name <https://github.com/grazzolini/mkinitcpio-dropbear>`_ by an Arch Linux developer. A
`fork of the mkinitcpio-dropbear project <https://github.com/ahesford/mkinitcpio-dropbear>`_ contains a few minor
improvements in runtime configuration and key management. If these improvements are not needed, using the upstream
project is perfectly acceptable.
Once again, the mkinitcpio module must first be downloaded and installed::
curl -L https://github.com/ahesford/mkinitcpio-dropbear/archive/master.tar.gz | tar -zxvf - -C /tmp
mkdir -p /etc/zfsbootmenu/initcpio/{install,hooks}
  cp /tmp/mkinitcpio-dropbear-master/dropbear_hook /etc/zfsbootmenu/initcpio/hooks/dropbear
  cp /tmp/mkinitcpio-dropbear-master/dropbear_install /etc/zfsbootmenu/initcpio/install/dropbear
rm -r /tmp/mkinitcpio-dropbear-master
The upstream ``dropbear`` module will attempt to copy host OpenSSH keys into ``/etc/dropbear`` if possible; otherwise,
it will generate random host keys. Both options are undesirable. Copying host keys will leave these protected files
directly accessible to anybody able to read a ZFSBootMenu image, which is probably every user on the system. Generating
unique keys with each run inhibits your ability to detect interlopers when you connect to your bootloader via SSH. The
fork will, by default, respect any existing dropbear keys available as ``/etc/dropbear/dropbear_*_host_key``. Therefore,
make some new host keys for use in your ZFSBootMenu image::
mkdir -p /etc/dropbear
for keytype in rsa ecdsa ed25519; do
dropbearkey -t "${keytype}" -f "/etc/dropbear/dropbear_${keytype}_host_key"
done
The module also requires, at ``/etc/dropbear/root_key``, a set of authorized SSH keys that will be given access to the
``root`` account in the image. On a single-user system, it is sufficient to do::
ln -s ${HOME}/.ssh/authorized_keys /etc/dropbear/root_key
assuming that ``${HOME}`` points to the home directory of the user who should be given access to ZFSBootMenu.
Finally, enable the ``dropbear`` module in ``/etc/zfsbootmenu/mkinitcpio.conf`` by manually appending ``dropbear`` to
the ``HOOKS`` array, or by running::
sed -e '/HOOKS.*rclocal/a HOOKS+=(dropbear)' -i /etc/zfsbootmenu/mkinitcpio.conf
Final Steps
~~~~~~~~~~~
With the above configuration complete, running ``generate-zbm`` should produce a ZFSBootMenu image that contains the
necessary components to enable an SSH server in your bootloader. This can be verified with the ``lsinitrd`` tool
provided by dracut or the ``lsinitcpio`` tool provided by mkinitcpio. (The ``lsinitcpio`` tool is not able to inspect
UEFI bundles, but ``lsinitrd`` can.) In the file listing, you should see keys in ``/etc/dropbear``, the ``dropbear`` and
``ip`` executables, and the file ``root/.ssh/authorized_keys``.
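For example, assuming a UEFI bundle was written to ``/boot/efi/EFI/void/vmlinuz.EFI`` (adjust the path to your layout), a quick inspection might be::

  lsinitrd /boot/efi/EFI/void/vmlinuz.EFI | grep -E 'dropbear|host_key|authorized_keys'

If the dropbear components and key files appear in the listing, the image should be ready for remote access.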
After rebooting, ZFSBootMenu should configure the network interface, launch an SSH server and accept connections on TCP
port 22 by default. If your SSH client complains because it finds ZFSBootMenu keys when it expects to find your normal
host keys, you may wish to reconfigure dropbear to listen on a non-standard port. The fork of ``mkinitcpio-dropbear``
supports this by writing a ``dropbear_listen`` definition to ``/etc/dropbear/dropbear.conf``::
echo 'dropbear_listen=2222' > /etc/dropbear/dropbear.conf
After writing this file (adjust ``2222`` to whatever port you prefer), re-run ``generate-zbm``, reboot and confirm that
dropbear listens where expected.
Accessing ZFSBootMenu Remotely
------------------------------
When you connect to ZFSBootMenu via SSH, you will be presented a simple shell prompt. Launch ``zfsbootmenu`` to start
the menu interface over the remote connection::
zfsbootmenu
You may then use the menu as if you were connected locally.
.. note::
  Recent versions of ZFSBootMenu automatically set the ``TERM`` environment variable to ``linux``. If you are running an
older version, your SSH client may have provided a more specific terminal definition that will not be recognized by
the restricted environment provided by ZFSBootMenu. Under these circumstances, you may need to run::
export TERM=linux
from the login shell to ensure that basic terminal functionality works as expected.
If you followed the :doc:`Void Linux ZFSBootMenu install guide </guides/void-linux/single-disk-uefi>` and configured
rEFInd to launch ZFSBootMenu, you may need to remove the ``zbm.skip`` argument from the default menu entry if you have
no encrypted boot environments but would still like remote access. Otherwise, rEFInd will attempt to bypass the
ZFSBootMenu countdown, and your default boot environment will be started immediately if possible. In this case, either
set ``zbm.timeout`` to a suitably long delay (*e.g.*, 60 seconds) to give yourself time to connect and launch
ZFSBootMenu remotely before the automatic boot can proceed, or use ``zbm.show`` by default to prevent automatic boot and
force the local instance to show the interactive menu immediately.
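As a sketch, assuming a ``refind_linux.conf`` patterned after the Void Linux guide, the entries might read::

  "Boot default"  "quiet loglevel=0 zbm.timeout=60"
  "Boot to menu"  "quiet loglevel=0 zbm.show"

Here the default entry gives a 60-second window to connect via SSH before the automatic boot proceeds.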
.. note::
To provide some safety against multi-user conflicts, only one ZFSBootMenu instance is allowed to run at any given
time. If you have encrypted boot environments, this will generally not present an issue, because the local instance
will always block awaiting passphrase entry before launching the menu instance. Otherwise, the later instance of
ZFSBootMenu will wait patiently for the earlier instance to terminate before continuing. If you are *certain* that the
currently running instance is not being actively used, you can interrupt the wait loop by pressing ``[ESC]`` and then
run::
rm /zfsbootmenu/active
to eliminate the indicator of the other running instance. You may then run ``zfsbootmenu`` again to launch the menu.