# crichez.secureboot

This repository contains Ansible roles to configure secure boot. Only one role
is currently provided: `kernel`.

## `crichez.secureboot.kernel`

The `kernel` role configures the target host to boot from a unified kernel
image signed by a local machine owner key.

### Host requirements

1. The distribution uses systemd (no Alpine or BSDs, sorry :/)
2. Secure boot is enabled
3. The machine owner key is stored in an NSS database (see
[efikeygen](https://manpages.debian.org/testing/pesign/efikeygen.1.en.html))
4. The machine owner key is enrolled in MokListRT (see
[mokutil](https://manpages.debian.org/testing/mokutil/mokutil.1.en.html))
5. The following packages are installed:
* pesign
* mokutil
* systemd-boot
* systemd-ukify
* python3-cryptography
* python3-virt-firmware
* python3-importlib-resources (appears to be needed only on Debian-like platforms)
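
To satisfy requirements 3 and 4, a machine owner key can be generated and
staged for enrollment roughly as follows. This is a sketch using the role's
default database path and key nickname (`mok`); adjust both to your setup:
```sh
# Sketch: generate a self-signed kernel-signing key in the default NSS
# database used by this role.
sudo efikeygen --dbdir /etc/pki/pesign --self-sign --kernel \
    --common-name 'CN=mok' --nickname 'mok'

# Export the certificate and stage it for MOK enrollment. A reboot is
# required to complete enrollment through the MOK manager.
sudo certutil -d /etc/pki/pesign -n 'mok' -Lr > mok.cer
sudo mokutil --import mok.cer
```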

> [!CAUTION]
> Although systemd-boot must be installed, the actual bootloader
> must not be configured. On Debian-like platforms, you will likely need to
> reconfigure your bootloader after installing systemd-boot. For GRUB, run
> `sudo update-grub`. For other bootloaders, consult their documentation.
> Failing to do so could leave the host unable to boot after an error. On
> RedHat platforms, the systemd-boot package does not seem to break anything.

### Controller requirements

1. On Debian-like platforms, `git` must be available
2. The following collections are available:
* `ansible.builtin`
* `community.general`
* `community.crypto`
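
The community collections can be installed ahead of time with
`ansible-galaxy`. A minimal requirements file might look like this (the file
name is just a convention; `ansible.builtin` ships with ansible-core and needs
no entry):
```yaml
# requirements.yml (sketch): collections this role depends on
collections:
  - name: community.general
  - name: community.crypto
```

Install them with `ansible-galaxy collection install -r requirements.yml`.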

### Usage

By default, the role assumes the machine owner key is stored in the NSS
database at `/etc/pki/pesign`, and its nickname/friendly name is `mok`.
If this is the case for you, you can include the role without any extra vars:
```yaml
- name: Reboot from a signed unified kernel image
  ansible.builtin.include_role:
    name: kernel
```

The role's arguments support a custom NSS database path, and MOK nickname:
```yaml
- name: Reboot from a signed unified kernel image
  ansible.builtin.include_role:
    name: kernel
  vars:
    kernel_pesign_db: /etc/my/database/path
    kernel_pesign_key: 'Bob Dorough Secure Boot Signing Key'
```
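
When calling the role from outside the collection, use its fully qualified
name. A minimal playbook might look like this (the host pattern and privilege
escalation are assumptions for illustration):
```yaml
# site.yml (sketch): apply the role with the default database and nickname
- name: Configure secure boot with a signed UKI
  hosts: all
  become: true
  tasks:
    - name: Reboot from a signed unified kernel image
      ansible.builtin.include_role:
        name: crichez.secureboot.kernel
      vars:
        kernel_pesign_db: /etc/pki/pesign
        kernel_pesign_key: mok
```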

**Change reporting:**

The role only reports changes that are relevant to its *intent.* Creating a
temporary directory, for example, does not report a change, but modifying
files under /etc will report a change *even if* the UKI was not rebuilt.
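
Selective reporting like this is typically achieved in Ansible with
`changed_when`. A sketch of the pattern (not necessarily this role's exact
tasks):
```yaml
# Sketch: an incidental task suppresses its changed status so that only
# intent-relevant tasks show up in the play recap
- name: Create a working directory
  ansible.builtin.tempfile:
    state: directory
  register: workdir
  changed_when: false
```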

**Error recovery:**

If an error occurs during the configuration process, changes are reverted.
If an error occurs during recovery, please file a bug report, as I really
strive to avoid bricking your system 🙂.

To recover manually on RedHat:
```sh
# Remove kernel-install customizations
sudo rm -f /etc/kernel/install.conf

# Reinstall kernel
sudo kernel-install add
```

To recover manually on Debian:
```sh
# Remove postinst.d and postrm.d scripts
sudo rm -f /etc/kernel/postinst.d/zz-kernel-install
sudo rm -f /etc/kernel/postrm.d/zz-kernel-uninstall

# Reinstall kernel
sudo apt reinstall linux-image-$(uname -r)
```

### Limitations

On Debian-like platforms, some components of the `virt-firmware` package will
not be updated on `apt upgrade`:
* The script at `/etc/kernel/install.d/99-uki-uefi-setup.install`
* The unit at `/etc/systemd/system/kernel-bootcfg-boot-successful.service`

If you use unattended-upgrades, mind your filters to avoid inadvertently
upgrading virt-firmware and breaking your next kernel upgrade.
These files can be updated to the version matching your package from
[source](https://gitlab.com/kraxel/virt-firmware), or by running the role
again after `apt upgrade python3-virt-firmware`.
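
With the stock unattended-upgrades setup on Debian-like platforms, one way to
pin the package is a blacklist entry. A sketch (the configuration file name
varies by distribution):
```
// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt, sketch)
Unattended-Upgrade::Package-Blacklist {
    "python3-virt-firmware";
};
```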

## Testing

The test environment is really bulky because we need to emulate UEFI firmware,
which, as far as I know, is only available in virtual machines. I am very open
to suggestions in this regard. Expect to spend ~10 minutes on your first test
run if you don't have access to KVM.

### Dependencies

**The following system tools are required:**
1. libvirt
2. qemu-system-x86_64
3. libvirt-dev (Debian) / libvirt-devel (RedHat)
4. libvirt-python (can be venv-only, but mind version mismatches with the libvirt-dev/libvirt-devel package)
5. xorriso
6. qemu-img
7. openssl
8. GNU make (technically optional, but strongly recommended)

**The following python libraries are required** (see requirements.txt):
1. ansible
2. virt-firmware
3. lxml
4. cryptography
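
One way to keep these isolated from system Python packages is a virtual
environment. A sketch (the install step requires network access):
```sh
# Sketch: install the Python test dependencies into a local venv
python3 -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt
```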

> [!IMPORTANT]
> *If testing on a Darwin aarch64 host*, networking depends on
> [socket_vmnet](https://github.com/lima-vm/socket_vmnet). The recommended
> installation method is via homebrew, and is the only time you need to be
> root:
> ```sh
> brew install socket_vmnet
> sudo brew services start socket_vmnet
> ```

### Configuration

The file at `tests/integration/targets/role_uki/vars/platforms.yml` defines the
`platforms` variable. It is a list of dictionaries that describe each platform
the role will be tested against.
```yaml
platforms:
  # The name of the platform. In this example the system's image is
  # downloaded to '.cache/Fedora_41.qcow2'. Platform-specific files are kept
  # in '.build/Fedora_41/'. The libvirt machine's name is 'SB_Fedora_41'. The
  # name of the host in the generated inventory file is 'Fedora_41'.
  - name: Fedora_41
    # The URL to a qcow2 cloud image for this platform. The image is only
    # downloaded if a file at '.cache/Fedora_41.qcow2' doesn't exist. For
    # custom images (e.g. RHEL_10) or other trickery, just write the file
    # yourself and this key can be omitted.
    url: 'https://download.fedoraproject.org/path/to/qcow2/cloud/image'
    # A random MAC address. This is only used internally for VM IP discovery.
    mac_address: "{{ '54:52:00' | community.general.random_mac }}"
    # Packages that will be installed by cloud-init before the VM becomes
    # available to the host/controller. This is very slow, so I recommend
    # using regular Ansible modules wherever you can. Sadly, in the case of
    # Fedora_41, this is required.
    init_packages:
      - python3-libdnf5
```

### Running the test suite

There are two playbooks that must run in order:
1. `tests/integration/targets/role_uki/setup.yml` defines, starts, and
configures the test machines using virt-firmware, xorriso, and libvirt.
This playbook will be run on the **host** and only needs a `srcdir` variable
that corresponds to the absolute path of the project root directory.
2. `.build/test.yml` installs dependencies and runs the role several times
on each test machine, each time with different success and failure
conditions. This file is copied into the .build directory from
`tests/integration/targets/role_uki/test.yml` for role discovery. This
playbook also needs a `srcdir` variable, the SSH key file at
`.build/id_ed25519`, and the inventory file at `.build/inventory.yml`.

For your convenience, `make test` runs exactly these two playbooks. The setup
playbook takes time on the first run, then finishes quickly on subsequent
runs. If you'd like to run the playbooks directly, consult the Makefile.
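
Roughly, the equivalent direct invocations look like this (a sketch; consult
the Makefile for the exact flags it uses):
```sh
# 1. Define, start, and configure the test machines (runs on the host)
ansible-playbook tests/integration/targets/role_uki/setup.yml \
    -e srcdir="$(pwd)"

# 2. Run the role against the generated inventory
ansible-playbook .build/test.yml \
    -e srcdir="$(pwd)" \
    -i .build/inventory.yml \
    --private-key .build/id_ed25519
```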

### Resetting between tests

The playbook at `tests/integration/targets/role_uki/teardown.yml` destroys and
undefines test machines, forgets their host keys, and removes built artifacts.
It does not remove downloaded platform images in `.cache`, so you can reset the
test environment without downloading 3GB worth of images each time.