{"id":14965335,"url":"https://github.com/opensuse/yomi","last_synced_at":"2025-10-25T11:31:27.440Z","repository":{"id":35612852,"uuid":"167549970","full_name":"openSUSE/yomi","owner":"openSUSE","description":"Yet one more installer","archived":false,"fork":false,"pushed_at":"2024-07-24T17:05:28.000Z","size":484,"stargazers_count":41,"open_issues_count":1,"forks_count":7,"subscribers_count":15,"default_branch":"master","last_synced_at":"2025-01-31T07:04:06.476Z","etag":null,"topics":["installer","linux","opensuse","saltstack"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/openSUSE.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2019-01-25T13:15:45.000Z","updated_at":"2025-01-29T13:04:48.000Z","dependencies_parsed_at":"2024-07-24T19:43:57.388Z","dependency_job_id":null,"html_url":"https://github.com/openSUSE/yomi","commit_stats":null,"previous_names":[],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openSUSE%2Fyomi","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openSUSE%2Fyomi/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openSUSE%2Fyomi/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openSUSE%2Fyomi/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/openSUSE","download_url":"https://codeload.github.com/openSUSE/yomi/tar.gz/refs/heads/master","host":{"name":"
GitHub","url":"https://github.com","kind":"github","repositories_count":238128552,"owners_count":19421053,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["installer","linux","opensuse","saltstack"],"created_at":"2024-09-24T13:34:36.190Z","updated_at":"2025-10-25T11:31:21.995Z","avatar_url":"https://github.com/openSUSE.png","language":"Python","readme":"# Yomi - Yet one more installer\n\nTable of contents\n=================\n* [Yomi - Yet one more installer](#yomi---yet-one-more-installer)\n   * [What is Yomi](#what-is-yomi)\n   * [Overview](#overview)\n   * [Installing and configuring salt-master](#installing-and-configuring-salt-master)\n      * [Other ways to install salt-master](#other-ways-to-install-salt-master)\n      * [Looking for the pillar](#looking-for-the-pillar)\n      * [Enabling auto-sign](#enabling-auto-sign)\n      * [Salt API](#salt-api)\n   * [The Yomi formula](#the-yomi-formula)\n      * [Looking for the pillar in Yomi](#looking-for-the-pillar-in-yomi)\n      * [Enabling auto-sign in Yomi](#enabling-auto-sign-in-yomi)\n      * [Salt API in Yomi](#salt-api-in-yomi)\n         * [Real time monitoring in Yomi](#real-time-monitoring-in-yomi)\n   * [Booting a new machine](#booting-a-new-machine)\n      * [The ISO image](#the-iso-image)\n      * [PXE Boot](#pxe-boot)\n      * [Finding the master node](#finding-the-master-node)\n      * [Setting the minion ID](#setting-the-minion-id)\n      * [Adding user provided configuration](#adding-user-provided-configuration)\n      * [Container](#container)\n   * [Basic operations](#basic-operations)\n      * [Getting 
hardware information](#getting-hardware-information)\n      * [Configuring the pillar](#configuring-the-pillar)\n      * [Cleaning the disks](#cleaning-the-disks)\n      * [Applying the yomi state](#applying-the-yomi-state)\n   * [Pillar reference for Yomi](#pillar-reference-for-yomi)\n      * [config section](#config-section)\n      * [partitions section](#partitions-section)\n      * [lvm section](#lvm-section)\n      * [raid section](#raid-section)\n      * [filesystems section](#filesystems-section)\n      * [bootloader section](#bootloader-section)\n      * [software section](#software-section)\n      * [suseconnect section](#suseconnect-section)\n      * [salt-minion section](#salt-minion-section)\n      * [services section](#services-section)\n      * [networks section](#networks-section)\n      * [users section](#users-section)\n\n# What is Yomi\n\nYomi (yet one more installer) is a new proposal for an installer for\nthe [open]SUSE family. It is designed as a\n[SaltStack](https://www.saltstack.com/) state, and expected to be used\nin situations where unattended installation of heterogeneous nodes is\nrequired, and where some bits of intelligence in the configuration\nfile can help to customize the installation.\n\nBeing a Salt state also makes the installation process one more step\nof the provisioning stage, making Yomi a good candidate for\nintegration in any workflow where SaltStack is used.\n\n\n# Overview\n\nTo execute Yomi we need a modern version of Salt, as we need special\nfeatures that are only in the\n[master](https://github.com/saltstack/salt/tree/master) branch of\nSalt. Technically we can use the latest released version of Salt for\nsalt-master, but for the minions we need the most up-to-date\nversion. 
The good news is that most of the patches are currently\nmerged in the openSUSE package of Salt.\n\nYomi is developed in\n[OBS](https://build.opensuse.org/project/show/systemsmanagement:yomi),\nand actually consists of two components:\n\n* [yomi-formula](https://build.opensuse.org/package/show/systemsmanagement:yomi/yomi-formula):\n  contains the Salt states and modules required to drive an\n  installation. The [source code](https://github.com/openSUSE/yomi) of\n  the project is available under the openSUSE organization on GitHub.\n* [openSUSE-Tumbleweed-Yomi](https://build.opensuse.org/package/show/systemsmanagement:yomi/openSUSE-Tumbleweed-Yomi):\n  is the image that can be used to boot the new nodes, and includes\n  the `salt-minion` service already configured. There are two versions\n  of this image, one that is used as a LiveCD image and another designed\n  to be used from a PXE Boot server.\n\nThe installation process of Yomi will require:\n\n* Install and configure the\n  [`salt-master`](#installing-and-configuring-salt-master) service.\n* Install the [`yomi-formula`](#the-yomi-formula) package.\n* Prepare the [pillar](#pillar-reference-for-yomi) for the new installations.\n* Boot the new systems with the [ISO image](#the-iso-image) or via\n  [PXE boot](#pxe-boot).\n\nCurrently Yomi supports the installation on x86_64 and ARM64\n(aarch64) with EFI.\n\n\n# Installing and configuring salt-master\n\nSaltStack can be deployed with different architectures. The\nrecommended one requires the `salt-master` service.\n\n```bash\nzypper in salt-master\n\nsystemctl enable --now salt-master.service\n```\n\n## Other ways to install salt-master\n\nFor other installation methods, read the [official\ndocumentation](https://docs.saltstack.com/en/latest/topics/installation/index.html). 
For\nexample, for development purposes installing it inside a virtual\nenvironment can be a good idea:\n\n```bash\npython3 -mvenv venv\n\nsource venv/bin/activate\n\npip install --upgrade pip\npip install salt\n\n# Create the basic layout and config files\nmkdir -p venv/etc/salt/pki/{master,minion} \\\n      venv/etc/salt/autosign_grains \\\n      venv/var/cache/salt/master/file_lists/roots\n\ncat \u003c\u003cEOF \u003e venv/etc/salt/master\nroot_dir: $(pwd)/venv\n\nfile_roots:\n  base:\n    - $(pwd)/srv/salt\n\npillar_roots:\n  base:\n    - $(pwd)/srv/pillar\nEOF\n```\n\n## Looking for the pillar\n\nThe Salt pillar is the data that the Salt states use to decide the\nactions that need to be done. For example, in the case of Yomi the\ntypical data will be the layout of the hard disks, the software\npatterns that will be installed, or the users that will be\ncreated. For a complete explanation of the pillar required by Yomi,\ncheck the section [Pillar reference for\nYomi](#pillar-reference-for-yomi).\n\nBy default Salt will search the states in `/srv/salt`, and the pillar\nin `/srv/pillar`, as established by the `file_roots` and `pillar_roots`\nparameters in the default configuration file (`/etc/salt/master`).\n\nTo indicate a different place to find the pillar, we can add a\nnew snippet in the `/etc/salt/master.d` directory:\n\n```bash\ncat \u003c\u003cEOF \u003e /etc/salt/master.d/pillar.conf\npillar_roots:\n  base:\n    - /srv/pillar\n    - /usr/share/yomi/pillar\nEOF\n```\n\nThe `yomi-formula` package already contains an example of such\nconfiguration. Check section [Looking for the pillar in\nYomi](#looking-for-the-pillar-in-yomi).\n\n## Enabling auto-sign\n\nTo simplify the discovery and key management of the minions, we can\nuse the auto-sign feature of Salt. 
To do that we need to add a new\nfile in `/etc/salt/master.d`.\n\n```bash\necho \"autosign_grains_dir: /etc/salt/autosign_grains\" \u003e \\\n     /etc/salt/master.d/autosign.conf\n```\n\nThe Yomi ISO image available in Factory already exports some UUIDs\ngenerated for each minion, so we need to list on the master all the\npossible valid UUIDs.\n\n```bash\nmkdir -p /etc/salt/autosign_grains\n\nfor i in $(seq 0 9); do\n  echo $(uuidgen --md5 --namespace @dns --name http://opensuse.org/$i)\ndone \u003e /etc/salt/autosign_grains/uuid\n```\n\nThe `yomi-formula` package already contains an example of such\nconfiguration. Check section [Enabling auto-sign in\nYomi](#enabling-auto-sign-in-yomi).\n\n## Salt API\n\nThe `salt-master` service can be accessed via a REST API, provided by\nan external tool that needs to be enabled.\n\n```bash\nzypper in salt-api\n\nsystemctl enable --now salt-api.service\n```\n\nThere are different options to configure the `salt-api` service, but\nit is safe to choose `CherryPy` as a back-end to serve the requests of\nSalt API.\n\nWe need to configure this service to listen on one port, for example\n8000, and to associate an authorization mechanism. Read the Salt\ndocumentation about this topic for different options.\n\n```bash\ncat \u003c\u003cEOF \u003e /etc/salt/master.d/salt-api.conf\nrest_cherrypy:\n  port: 8000\n  debug: no\n  disable_ssl: yes\nEOF\n\ncat \u003c\u003cEOF \u003e /etc/salt/master.d/eauth.conf\nexternal_auth:\n  file:\n    ^filename: /etc/salt/user-list.txt\n    salt:\n      - .*\n      - '@wheel'\n      - '@runner'\n      - '@jobs'\nEOF\n\necho \"salt:linux\" \u003e /etc/salt/user-list.txt\n```\n\nThe `yomi-formula` package already contains an example of such\nconfiguration. 
Check section [Salt API in Yomi](#salt-api-in-yomi).\n\n\n# The Yomi formula\n\nThe states and modules required by Salt to drive an installation can\nbe installed where the `salt-master` resides:\n\n```bash\nzypper in yomi-formula\n```\n\nThis package will install the states in\n`/usr/share/salt-formulas/states`, some pillar examples in\n`/usr/share/yomi/pillar` and configuration files in `/usr/share/yomi`.\n\n## Looking for the pillar in Yomi\n\nYomi expects the pillar to be a normal YAML document, optionally\ngenerated by a Jinja template, as is usual in Salt.\n\nThe schema of the pillar is described in the section [Pillar reference\nfor Yomi](#pillar-reference-for-yomi), but the `yomi-formula` package\nprovides a set of examples that can be used to deploy MicroOS\ninstallations, Kubic, LVM, RAID or simple openSUSE Tumbleweed ones.\n\nIn order for `salt-master` to find the pillar, we need to change the\n`pillar_roots` entry in the configuration file, or use the one\nprovided by the package:\n\n```bash\ncp -a /usr/share/yomi/pillar.conf /etc/salt/master.d/\nsystemctl restart salt-master.service\n```\n\n## Enabling auto-sign in Yomi\n\nThe images generated by the Open Build Service that are ready to be\nused together with Yomi contain a list of random UUIDs that can be\nused as an auto-sign grain in `salt-master`.\n\nWe can enable this feature by adding the configuration file provided by\nthe package:\n\n```bash\ncp /usr/share/yomi/autosign.conf /etc/salt/master.d/\nsystemctl restart salt-master.service\n```\n\n## Salt API in Yomi\n\nAs described in the section [Salt API](#salt-api), we need to enable\nthe `salt-api` service in order to provide a REST API service to\n`salt-minion`.\n\nThis service is used by Yomi to monitor the installation, reading the\nevent bus of Salt. 
To enable the real-time events we need to\nset the `events` field to `yes` in the configuration section of the\npillar.\n\nWe can enable this service easily (after installing the `salt-api`\npackage and the dependencies) using the provided configuration file:\n\n```bash\ncp /usr/share/yomi/salt-api.conf /etc/salt/master.d/\nsystemctl restart salt-master.service\n```\n\nFeel free to edit `/etc/salt/master.d/salt-api.conf` and provide the\nrequired certificates to enable the SSL connection, and use a different\nauthorization mechanism. The current one is based on reading the file\n`/usr/share/yomi/user-list.txt`, which stores the password in plain\ntext. So please, *do not* use this in production.\n\n### Real time monitoring in Yomi\n\nOnce we check that the `config` section of our pillar contains this:\n\n```yaml\nconfig:\n  events: yes\n```\n\nWe can launch the `yomi-monitor` tool.\n\n```bash\nexport SALTAPI_URL=http://localhost:8000\nexport SALTAPI_EAUTH=file\nexport SALTAPI_USER=salt\nexport SALTAPI_PASS=linux\n\nyomi-monitor -r -y\n```\n\nThe `yomi-monitor` tool stores in a local cache the authentication\ntokens generated by Salt API. This will accelerate the next connection\nto the service, but sometimes can cause authentication errors (for\nexample, when the cache is in place but the salt-master gets\nreinstalled). The option `-r` makes sure that this cache is removed\nbefore connecting. Check the help option of the tool for more\ninformation.\n\n\n# Booting a new machine\n\nAs described in the previous sections, Yomi is a set of Salt states\nthat are used to drive the installation of a new operating system. To\ntake full control of the system where the installation will be done,\nyou will need to boot from an external system that provides an already\nconfigured `salt-minion`, and a set of CLI tools required during the\ninstallation.\n\nWe can deploy all the requirements using different mechanisms. One,\nfor example, is via PXE boot. 
We can build a server that will deliver\nthe Linux `kernel` and an `initrd` with all the required\nsoftware. Another alternative is to have a live ISO image\nthat you can use to boot from the USB port.\n\nThere is an already available image that contains all the requirements\nin\n[Factory](https://build.opensuse.org/package/show/openSUSE:Factory/openSUSE-Tumbleweed-Yomi). This\nis an image built from openSUSE Tumbleweed repositories that includes\na very minimal set of tools, including the openSUSE version of\n`salt-minion`.\n\nTo use the latest version of the image, together with the latest version\nof `salt-minion` that includes all the patches that are under review\nin the SaltStack project, you can always use the version from the\n[devel\nproject](https://build.opensuse.org/package/show/systemsmanagement:yomi/openSUSE-Tumbleweed-Yomi).\n\nNote that this image is a `_multibuild` one, and generates two\ndifferent images. One is a LiveCD ISO image, ready to be booted from\nUSB or DVD, and the other one is a PXE Boot ready image.\n\n## The ISO image\n\nThe ISO image is a LiveCD that can be booted from USB or from DVD, and\nthe latest version can always be downloaded from:\n\n```bash\nwget https://download.opensuse.org/repositories/systemsmanagement:/yomi/images/iso/openSUSE-Tumbleweed-Yomi.x86_64-livecd.iso\n```\n\nThis image does not have a root password, so if we have physical access to\nthe node we can become root locally.  The `sshd` service is enabled\nduring boot time but for security reasons the user `root` cannot\naccess via SSH (`PermitEmptyPasswords` is not set).  
To gain remote\naccess to `root` we need to set the kernel command line parameter\n`ym.sshd=1` (for example, via PXE Boot).\n\n## PXE Boot\n\nThe second image available is an OEM ramdisk that can be booted from\nPXE Boot.\n\nTo install the image we first need to download the file\n`openSUSE-Tumbleweed-Yomi.x86_64-${VERSION}-pxeboot-Build${RELEASE}.${BUILD}.install.tar`\nfrom the Factory, or directly from the development project.\n\nWe need to start the `tftpd` service or use `dnsmasq` to also behave\nas a TFTP server. There is some documentation in the [openSUSE\nwiki](https://en.opensuse.org/SDB:PXE_boot_installation), and if you\nare using QEMU you can also check the appendix document.\n\n```bash\nmkdir -p /srv/tftpboot/pxelinux.cfg\ncp /usr/share/syslinux/pxelinux.0 /srv/tftpboot\n\ncd /srv/tftpboot\ntar -xvf $IMAGE\n\ncat \u003c\u003cEOF \u003e /srv/tftpboot/pxelinux.cfg/default\ndefault yomi\nprompt   1\ntimeout  30\n\nlabel yomi\n  kernel pxeboot.kernel\n  append initrd=pxeboot.initrd.xz rd.kiwi.install.pxe rd.kiwi.install.image=tftp://${SERVER}/openSUSE-Tumbleweed-Yomi.xz rd.kiwi.ramdisk ramdisk_size=1048576\nEOF\n```\n\nThis image is based on Tumbleweed, which by default leverages\npredictable network interface names.  If your image is based on a\ndifferent one, be sure to add `net.ifnames=1` at the end of the\n`append` section.\n\n## Finding the master node\n\nThe `salt-minion` configuration in the Yomi image will search for the\n`salt-master` system under the `salt` name. It is expected that the local\nDNS service will resolve the `salt` name to the correct IP address.\n\nDuring boot time of the Yomi image we can change the address where the\nmaster node is expected to be found. To do that we can add in the\nGRUB menu the entry `ym.master=my_master_address`. 
For example\n`ym.master=10.0.2.2` will make the minion search for the master at the\naddress `10.0.2.2`.\n\nAn internal systemd service in the image will detect this address and\nconfigure the `salt-minion` accordingly.\n\nUnder the current Yomi states, this address will be copied into the\nnewly installed system, together with the key delivered by the\n`salt-master` service. This means that once the system is fully\ninstalled with the new operating system, the new `salt-minion` will\nfind the master directly after the first boot.\n\n## Setting the minion ID\n\nIn a similar way, during the boot process we can set the minion ID\nthat will be assigned to the `salt-minion`, using the parameter\n`ym.minion_id`. For example, `ym.minion_id=worker01` will set the\nminion ID for this system as `worker01`.\n\nThe rules for the minion ID are a bit more complicated. Salt, by\ndefault, sets the minion ID equal to the FQDN or the IP of the node if\nno ID is specified. This may not be a good idea if the IP changes, so\nthe current rules are, in order:\n\n* The value from the `ym.minion_id` boot parameter.\n* The FQDN hostname of the system, if it is different from localhost.\n* The MAC address of the first interface of the system.\n\n## Adding user provided configuration\n\nSometimes we need to inject some extra configuration into `salt-minion`\nbefore the service runs. For example, we might need to add some grains,\nor enable some feature in the `salt-minion` service running inside the\nimage.\n\nTo do that we have two options: we can pass a URL with the content, or\nwe can add the full content as a parameter during the boot process.\n\nTo pass a URL we should use the `ym.config_url` parameter. For example,\n`ym.config_url=http://server.com/pub/myconfig.cfg` will download the\nconfiguration file, and will store it under the default name\n`config_url.cfg` in `/etc/salt/minion.d`. 
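The downloaded file is just a regular `salt-minion` configuration snippet. As an illustrative sketch (the grain names and values here are hypothetical, not part of Yomi), such a `myconfig.cfg` could look like:

```yaml
# Hypothetical user provided configuration, stored by the image as
# /etc/salt/minion.d/config_url.cfg
grains:
  role: worker
  datacenter: lab1
```
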
We can set a different name\nfrom the default via the parameter `ym.config_url_name`.\n\nIn a similar way we can use the parameter `ym.config` to declare the\nfull content of the user provided configuration file. You need to use\nquotes to mark the string and escaped control codes to indicate new\nlines or tabs, like `ym.config=\"grains:\\n my_grain: my_value\"`. This\nwill create a file named `config.cfg`, and the name can be overwritten\nwith the parameter `ym.config_name`.\n\n## Container\n\nBecause of the versatility of Salt, it is possible to execute the Salt\nmodules that Yomi uses without any `salt-master` nor `salt-minion`\nservice running. We could launch the installation using only the\n`salt-call` command in local mode.\n\nBecause of that, it is possible to deliver Yomi as a single container,\ncomposed of the different Salt and Yomi modules and states.\n\nWe can boot a machine using any mechanism, like a recovery image, and\nuse `podman` to pull the Yomi container. This container will be\nexecuted as a privileged one, mapping the external devices inside the\ncontainer space.\n\nTo pull the container we can do:\n\n```bash\npodman pull registry.opensuse.org/systemsmanagement/yomi/images/opensuse/yomi:latest\n```\n\nIt is recommended to create a local pillar directory:\n\n```bash\nmkdir pillar\n```\n\nOnce we have the pillar data, we can launch the installer:\n\n```bash\npodman run --privileged --rm \\\n  -v /dev:/dev \\\n  -v /run/udev:/run/udev \\\n  -v ./pillar:/srv/pillar \\\n  \u003cCONTAINER_ID\u003e \\\n  salt-call --local state.highstate\n```\n\n\n# Basic operations\n\nOnce `salt-master` is configured and running, the `yomi-formula`\nstates are available and a new system is booted with an up-to-date\n`salt-minion`, we can start to operate with Yomi.\n\nThe usual process is simple: describe the pillar information and apply\nthe `yomi` state to the node or nodes. 
It is not relevant how the pillar\nwas designed (maybe using a smart template that covers all the cases or\nwriting a raw YAML file that only covers one single installation).  In this\nsection we will provide some hints about how to get information that can\nhelp in this process.\n\n## Getting hardware information\n\nThe provided pillars are only examples of what we can do with\nYomi. Eventually we need to adapt them based on the hardware that we\nhave.\n\nWe can discover the hardware configuration with different\nmechanisms. One is to get the `grains` information directly from the\nminion:\n\n```bash\nsalt node grains.items\n```\n\nWe can get more detailed information using other Salt modules, like\n`partition.list`, `network.interfaces` or `udev.info`.\n\nWith Yomi we provide a simple interface to `hwinfo` that provides\nsome of the information that is required to make decisions about the\npillar in a single report:\n\n```bash\n# Synchronize all the modules to the minion\nsalt node saltutil.sync_all\n\n# Get a short report about some devices\nsalt node devices.hwinfo\n\n# Get a detailed report about some devices\nsalt node devices.hwinfo short=no\n```\n\n## Configuring the pillar\n\nThe package `yomi-formula` provides some pillar examples that can be\nused as a reference when you are creating your own profiles.\n\nSalt searches for the pillar information in the directories listed in the\n`pillar_roots` configuration entry, and using the snippet from the\nsection [Looking for the pillar](#looking-for-the-pillar), we can make\nthose examples available in our system.\n\nIn the case that we want to edit those files, we can copy them into a\ndifferent directory and add it to the `pillar_roots` entry.\n\n```bash\nmkdir -p /srv/pillar-yomi\ncp -a /usr/share/yomi/pillar/* /srv/pillar-yomi\n\ncat \u003c\u003cEOF \u003e /etc/salt/master.d/pillar.conf\npillar_roots:\n  base:\n    - /srv/pillar-yomi\n    - /srv/pillar\nEOF\nsystemctl restart salt-master.service\n```\n\nThe pillar tree starts with the `top.sls` 
file (there is another\n`top.sls` file for the states, do not confuse them).\n\n```yaml\nbase:\n  '*':\n    - installer\n```\n\nThis file is used to map the node with the data that the states will\nuse later. For this example the file that contains the data is\n`installer.sls`, but feel free to choose a different name when you are\ncreating your own pillar.\n\nThis `installer.sls` is used as an entry point for the rest of the\ndata. Inside the file there are some Jinja templates that can be edited\nto define different kinds of installations. This feature is leveraged\nby the\n[openQA](https://github.com/os-autoinst/os-autoinst-distri-opensuse/tree/master/tests/yomi)\ntests, to easily make multiple deployments.\n\nYou can edit the `{% set VAR=VAL %}` section to adjust it to your\ncurrent profile, or create one from scratch. The files\n`_storage.sls.*` are included for different scenarios, and this is the\nplace where the disk layout is described. Feel free to include them\ndirectly in your pillar, or use a different mechanism to decide the\nlayout.\n\n## Cleaning the disks\n\nYomi tries to be careful with the current data stored on the disks. 
By\ndefault, it will not remove any partition nor will it make an implicit\ndecision about the device where the installation will run.\n\nIf we want to remove the data from the device, we can use the provided\n`devices.wipe` execution module.\n\n```bash\n# List the partitions\nsalt node partition.list /dev/sda\n\n# Make sure that the new modules are in the minion\nsalt node saltutil.sync_all\n\n# Remove all the partitions and the filesystem information\nsalt node devices.wipe /dev/sda\n```\n\nTo wipe all the devices defined in the pillar at once, we can apply\nthe `yomi.storage.wipe` state.\n\n```bash\n# Make sure that the new modules are in the minion\nsalt node saltutil.sync_all\n\n# Remove all the partitions and the filesystem information\nsalt node state.apply yomi.storage.wipe\n```\n\n## Applying the yomi state\n\nFinally, to install the operating system defined by the pillar into\nthe new node, we need to apply the `yomi` state:\n\n```bash\nsalt node state.apply yomi\n```\n\nIf we have a `top.sls` file similar to this example, living in\n`/srv/salt` or in any other place where the `file_roots` option is\nconfigured:\n\n```yaml\nbase:\n  '*':\n    - yomi\n```\n\nwe can directly apply the highstate:\n\n```bash\nsalt node state.highstate\n```\n\n# Pillar reference for Yomi\n\nTo install a new node, we need to provide some data to describe the\ninstallation requirements, like the layout of the partitions, file\nsystems used, or what software to install inside the new\ndeployment. This data is collected in what is known in Salt as a\n[pillar](https://docs.saltstack.com/en/latest/topics/tutorials/pillar.html).\n\nTo configure the `salt-master` service to find the pillar, check the\nsection [Looking for the pillar](#looking-for-the-pillar).\n\nPillars can be associated with certain nodes in our network, making\nthis technique a basic way to map a description of how and what to\ninstall onto a node. 
This mapping is done via the `top.sls` file:\n\n```yaml\nbase:\n  'C7:7E:55:62:83:17':\n    - installer\n```\n\nIn `installer.sls` we will describe in detail the installation\nparameters that will be applied to the node whose minion ID matches\n`C7:7E:55:62:83:17`. Note that in this example we are using the MAC\naddress of the first interface as a minion ID (check the section\n**Enabling auto-sign** for an example).\n\nThe `installer.sls` pillar consists of several sections, which we\ndescribe here.\n\n## `config` section\n\nThe `config` section contains global configuration options that will\naffect the installer.\n\n* `events`: Boolean. Optional. Default: `yes`\n\n  Yomi can fire Salt events before and after the execution of the\n  internal states that Yomi uses to drive the installation. Using the\n  Salt API, WebSockets, or any other mechanism provided by Salt, we\n  can listen to the event bus and use this information to monitor the\n  installer. Yomi provides a basic tool, `yomi-monitor`, that shows\n  real time information about the installation process.\n\n  To disable the events, set this parameter to `no`.\n\n  Note that this option will add three new states for each single Yomi\n  state. One extra state is always executed before the normal state,\n  and is used to signal that a new state will be executed. If the\n  state is successfully terminated, a second extra state will send an\n  event to signal that the status of the state is positive. But if\n  the state fails, a third state will send the fail signal. All those\n  extra states will be shown in the final report of Salt.\n\n* `reboot`: String. Optional. Default: `yes`\n\n  Control the way that the node will reboot. There are five possible\n  values:\n\n  * `yes`: Will produce a full reboot cycle. 
This value can be\n    specified as the \"yes\" string, or the `True` boolean value.\n\n  * `no`: Will not reboot after the installation.\n\n  * `kexec`: Instead of rebooting, reload the new kernel installed in\n    the node.\n\n  * `halt`: The machine will halt at the end of the installation.\n\n  * `shutdown`: The machine will shut down at the end of the\n    installation.\n\n* `snapper`: Boolean. Optional. Default: `no`\n\n  In Btrfs configurations (and in LVM, but this is still not\n  implemented) we can install the snapper tool, to do automatic\n  snapshots before and after updates in the system. Once installed, a\n  first snapshot will be taken and the GRUB entry to boot from\n  snapshots will be added.\n\n* `locale`: String. Optional. Default: `en_US.utf8`\n\n  Sets the system locale, more specifically the LANG= and LC\_MESSAGES\n  settings. The argument should be a valid locale identifier, such as\n  `de_DE.UTF-8`. This controls the locale.conf configuration file.\n\n* `locale_message`: String. Optional.\n\n  Sets the LC\_MESSAGES setting of the system locale. The argument\n  should be a valid locale identifier, such as `de_DE.UTF-8`. This\n  controls the locale.conf configuration file.\n\n* `keymap`: String. Optional. Default: `us`\n\n  Sets the system keyboard layout. The argument should be a valid\n  keyboard map, such as `de-latin1`. This controls the \"KEYMAP\" entry\n  in the vconsole.conf configuration file.\n\n* `timezone`: String. Optional. Default: `UTC`\n\n  Sets the system time zone. The argument should be a valid time zone\n  identifier, such as \"Europe/Berlin\". This controls the localtime\n  symlink.\n\n* `hostname`: String. Optional.\n\n  Sets the system hostname. The argument should be a host name,\n  compatible with DNS. This controls the hostname configuration file.\n\n* `machine_id`: String. Optional.\n\n  Sets the system's machine ID. This controls the machine-id file. 
If\n  none is provided, the one from the current system will be reused.\n\n* `target`: String. Optional. Default: `multi-user.target`\n\n  Sets the default target used for the boot process.\n\nExample:\n\n```yaml\nconfig:\n  # Do not send events, useful for debugging\n  events: no\n  # Do not reboot after installation\n  reboot: no\n  # Always install snapper if possible\n  snapper: yes\n  # Set language to English / US\n  locale: en_US.UTF-8\n  # Japanese keyboard\n  keymap: jp\n  # Universal Timezone\n  timezone: UTC\n  # Boot in graphical mode\n  target: graphical.target\n```\n\n## `partitions` section\n\nYomi separates partitioning the devices from providing a file system,\ncreating volumes or building arrays of disks. The advantage of this is\nthat it usually composes better than other approaches, and makes it\neasier to add options that need to work correctly with the\nrest of the system.\n\n* `config`: Dictionary. Optional.\n\n  Subsection that stores some configuration options related to the\n  partitioner.\n\n  * `label`: String. Optional. Default: `msdos`\n\n    Default label for the partitions of the devices. We can use any\n    partition label recognized by `parted`'s `mklabel`, like `gpt`,\n    `msdos` or `bsd`. For UEFI systems, we need to set it to `gpt`.\n    This value will be used for all the devices if it is not\n    overwritten.\n\n  * `initial_gap`: Integer. Optional. Default: `0`\n\n    Initial gap (empty space) left before the first\n    partition. Usually 1MB is recommended, so GRUB has room to\n    write the code needed after the MBR, and the sectors are aligned\n    for multiple SSD and hard disk devices. The valid units are the\n    same as for `parted`. This value will be used for all the devices\n    if it is not overwritten.\n\n* `devices`: Dictionary.\n\n  List of devices that will be partitioned. 
  We can indicate already
  present devices, like `/dev/sda` or `/dev/hda`, but we can also
  indicate devices that will be present after the RAID configuration,
  like `/dev/md0` or `/dev/md/myraid`. We can use any valid device
  name in Linux, such as `/dev/disk/by-id/...`,
  `/dev/disk/by-label/...`, `/dev/disk/by-uuid/...` and others.

  For each device we have:

  * `label`: String. Optional. Default: `msdos`

    Partition label for the device. The meaning and the possible
    values are identical to `label` in the `config` section.

  * `initial_gap`: Integer. Optional. Default: `0`

    Initial gap (empty space) left before the first partition of
    this device.

  * `partitions`: Array. Optional.

    Partitions inside a device are described with an array. Each
    element of the array is a dictionary that describes a single
    partition.

    * `number`: Integer. Optional. Default: `loop.index`

      Expected partition number. Eventually this parameter will be
      really optional, when the partitioner can deduce it from other
      parameters. Today it is better to be explicit about the
      partition number, as this will guarantee that the partition is
      found on the hard disk if present. If it is not set, the number
      will be the current index position in the array.

    * `id`: String. Optional.

      Full name of the partition. For example, valid ids can be
      `/dev/sda1`, `/dev/md0p1`, etc. It is optional, as the name can
      be deduced from `number`.

    * `size`: Float or String.

      Size of the partition expressed in `parted` units. All the
      units need to match for partitions on the same device. For
      example, if `initial_gap` or the first partition is expressed
      in MB, all the sizes need to be expressed in MB too.

      The last partition can use the string `rest` to indicate that
      this partition will use all the free space available.
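      A sketch of this constraint on `rest` (a hypothetical check for
      illustration, not Yomi's actual validation code):

      ```python
      # Hypothetical sketch (not Yomi's code): the string `rest` is
      # only valid as the size of the last partition of a device.
      def rest_is_last(partitions):
          sizes = [part["size"] for part in partitions]
          return all(size != "rest" for size in sizes[:-1])

      layout = [{"size": "256MB"}, {"size": "1024MB"}, {"size": "rest"}]
      assert rest_is_last(layout)
      ```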
      If after
      this another partition is defined, Yomi will show a validation
      error.

    * `type`: String.

      A string that indicates what this partition will be used
      for. Yomi recognizes several types:

      * `swap`: This partition will be used for SWAP.
      * `linux`: Partition used for root, home or any data.
      * `boot`: Small partition used by GRUB in BIOS systems with a
        `gpt` label.
      * `efi`: EFI partition used by GRUB in UEFI systems.
      * `lvm`: Partition used to build an LVM physical volume.
      * `raid`: Partition that will be a component of a RAID array.

Example:

```yaml
partitions:
  config:
    label: gpt
    initial_gap: 1MB
  devices:
    /dev/sda:
      partitions:
        - number: 1
          size: 256MB
          type: efi
        - number: 2
          size: 1024MB
          type: swap
        - number: 3
          size: rest
          type: linux
```

## `lvm` section

To build an LVM we usually create some partitions (in the `partitions`
section) with the `lvm` type set, and in the `lvm` section we describe
the details. This section is a dictionary, where each key is the name
of the LVM volume group, and inside it we can find:

* `devices`: Array.

  List of components (partitions or full devices) that will constitute
  the physical volumes and the volume group of the LVM. If the
  element of the array is a string, this will be the name of a device
  (or partition) that belongs to the physical group.
  If the element is
  a dictionary, it will contain:

  * `name`: String.

    Name of the device or partition.

  The rest of the elements of the dictionary will be passed to the
  `pvcreate` command.

  Note that the name of the volume group will be the key under which
  this definition lives.

* `volumes`: Array.

  Each element of the array will define:

  * `name`: String.

    Name of the logical volume inside the volume group.

  The rest of the elements of the dictionary will be passed to the
  `lvcreate` command. For example, `size` and `extents` are used to
  indicate the size of the volume, and they can include a suffix to
  indicate the units. Those units are the same used by `lvcreate`.

The rest of the elements of this section will be passed to the
`vgcreate` command.

Example:

```yaml
lvm:
  system:
    devices:
      - /dev/sda1
      - /dev/sdb1
      - name: /dev/sdc1
        dataalignmentoffset: 7s
    clustered: 'n'
    volumes:
      - name: swap
        size: 1024M
      - name: root
        size: 16384M
      - name: home
        extents: 100%FREE
```

## `raid` section

As with LVM, to create RAID arrays we can first set up partitions
(with the type `raid`) and configure the details in this
section. Also, similar to the LVM section, the keys correspond to
the name of the device where the RAID will be created. Valid values
are names like `/dev/md0` or `/dev/md/system`.

* `level`: String.

  RAID level. Valid values can be `linear`, `raid0`, `0`, `stripe`,
  `raid1`, `1`, `mirror`, `raid4`, `4`, `raid5`, `5`, `raid6`, `6`,
  `raid10`, `10`, `multipath`, `mp`, `faulty`, `container`.

* `devices`: Array.

  List of devices or partitions that build the array.

* `metadata`: String. Optional. Default: `default`

  Metadata version for the superblock.
  Valid values are `0`, `0.9`,
  `1`, `1.0`, `1.1`, `1.2`, `default`, `ddm`, `imsm`.

The user can specify more parameters that will be passed directly to
`mdadm`, like `spare-devices` to indicate the number of extra devices
in the initial array, or `chunk` to specify the chunk size.

Example:

```yaml
raid:
  /dev/md0:
    level: 1
    devices:
      - /dev/sda1
      - /dev/sdb1
      - /dev/sdc1
    spare-devices: 1
    metadata: 1.0
```

## `filesystems` section

The partitions, devices or arrays created in previous sections usually
require a file system. This section simply lists each device name
and the file system (and properties) that will be applied to it.

* `filesystem`. String.

  File system to create on the device. Valid values are `swap`,
  `linux-swap`, `bfs`, `btrfs`, `xfs`, `cramfs`, `ext2`, `ext3`,
  `ext4`, `minix`, `msdos`, `vfat`. Technically Salt will search for a
  command that matches `mkfs.<filesystem>`, so the valid options can
  be more extensive than the ones listed here.

* `mountpoint`. String.

  Mount point where the device will be registered in `fstab`.

* `fat`. Integer. Optional.

  If the file system is `vfat` we can force the FAT size, like 12, 16
  or 32.

* `subvolumes`. Dictionary.

  For `btrfs` file systems we can specify more details.

  * `prefix`. String. Optional.

    `btrfs` sub-volume name under which the rest of the sub-volumes
    will be created. For example, if we set `prefix` to `@` and we
    create a sub-volume named `var`, Yomi will create it as `@/var`.

  * `subvolume`. Array.

    Each element of the array defines a sub-volume:

    * `path`. String.

      Path name for the sub-volume.

    * `copy_on_write`. Boolean. Optional.
      Default: `yes`

      Value for the copy-on-write option in `btrfs`.

Example:

```yaml
filesystems:
  /dev/sda1:
    filesystem: vfat
    mountpoint: /boot/efi
    fat: 32
  /dev/sda2:
    filesystem: swap
  /dev/sda3:
    filesystem: btrfs
    mountpoint: /
    subvolumes:
      prefix: '@'
      subvolume:
        - path: home
        - path: opt
        - path: root
        - path: srv
        - path: tmp
        - path: usr/local
        - path: var
          copy_on_write: no
        - path: boot/grub2/i386-pc
        - path: boot/grub2/x86_64-efi
```

## `bootloader` section

* `device`: String.

  Device name where GRUB2 will be installed. Yomi will take care of
  detecting whether it is a BIOS or a UEFI setup, and also whether
  Secure Boot is activated, to install and configure the bootloader
  (or the shim loader).

* `timeout`: Integer. Optional. Default: `8`

  Value for the `GRUB_TIMEOUT` parameter.

* `kernel`: String. Optional. Default: `splash=silent quiet`

  Line assigned to the `GRUB_CMDLINE_LINUX_DEFAULT` parameter.

* `terminal`: String. Optional. Default: `gfxterm`

  Value for the `GRUB_TERMINAL` parameter.

  If the value is set to `serial`, we need to add content to the
  `serial_command` parameter.

  If the value is set to `console`, we can pass the console parameters
  to the `kernel` parameter. For example, `kernel: splash=silent quiet
  console=tty0 console=ttyS0,115200`

* `serial_command`: String. Optional.

  Value for the `GRUB_SERIAL_COMMAND` parameter. If there is a value,
  `GRUB_TERMINAL` is expected to be `serial`.

* `gfxmode`: String. Optional. Default: `auto`

  Value for the `GRUB_GFXMODE` parameter.

* `theme`: Boolean. Optional. Default: `no`

  If `yes`, the `grub2-branding` package will be installed and
  configured.

* `disable_os_prober`: Boolean. Optional.
  Default: `False`

  Value for the `GRUB_DISABLE_OS_PROBER` parameter.

Example:

```yaml
bootloader:
  device: /dev/sda
```

## `software` section

We can indicate the repositories that will be registered in the new
installation, and the packages and patterns that will be installed.

* `config`. Dictionary. Optional.

  Local configuration for the software section. Except for `minimal`,
  `transfer`, and `verify`, all the options can be overwritten in each
  repository definition.

  * `minimal`: Boolean. Optional. Default: `no`

    Configure zypper to make a minimal installation, excluding
    recommended, documentation and multi-version packages.

  * `transfer`: Boolean. Optional. Default: `no`

    Transfer the current repositories (maybe defined in the media
    installation) into the installed system. If marked, this step will
    be done early, so any future action could update or replace one of
    the repositories.

  * `verify`: Boolean. Optional. Default: `yes`

    Verify the package key when installing.

  * `enabled`: Boolean. Optional. Default: `yes`

    If the repository is enabled, packages can be installed from
    there. A disabled repository will not be removed.

  * `refresh`: Boolean. Optional. Default: `yes`

    Enable auto-refresh of the repository.

  * `gpgcheck`: Boolean. Optional. Default: `yes`

    Enable or disable the GPG check for the repositories.

  * `gpgautoimport`: Boolean. Optional. Default: `yes`

    If enabled, automatically trust and import the public GPG key for
    the repository.

  * `cache`: Boolean. Optional. Default: `no`

    If the cache is enabled, the RPM packages will be kept.

* `repositories`. Dictionary.
  Optional.

  Each key of the dictionary will be the alias under which the
  repository is registered, and the value, if it is a string, the URL
  associated with it.

  If the value is a dictionary, we can overwrite some of the default
  configuration options set in the `config` section, with the
  exception of `minimal`. There are some more elements that we can set
  for the repository:

  * `url`: String.

    URL of the repository.

  * `name`: String. Optional.

    Descriptive name for the repository.

  * `priority`: Integer. Optional. Default: `0`

    Set the priority of the repository.

* `packages`. Array. Optional.

  List of packages or patterns to be installed.

* `image`. Dictionary. Optional.

  We can bootstrap the root file system from a partition image
  generated by KIWI (or any other mechanism), that will be copied into
  the partition that has the root mount point assigned. This can be
  used to speed up the installation process.

  Those images need to contain only the file system and the data. If
  the image contains a boot loader or partition information, the image
  will fail during the resize operation. To validate whether the image
  is suitable, a simple `file image.raw` will do.

  * `url`: String.

    URL of the image. As internally we are using curl to fetch the
    image, we can support multiple protocols like `http://`,
    `https://` or `tftp://` among others. The image can be compressed,
    and in that case one of these extensions must be used to indicate
    the format: [`gz`, `bz2`, `xz`]

  * `md5`|`sha1`|`sha224`|`sha256`|`sha384`|`sha512`: String. Optional.

    Checksum type and value used to validate the image. If this field
    is present but empty (only the checksum type, but with no value
    attached), the state will try to fetch the checksum file from the
    same URL given in the previous field.
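    The derivation of the checksum URL described here can be sketched
    as follows (a hypothetical helper for illustration, not Yomi's
    actual implementation):

    ```python
    # Hypothetical helper (not Yomi's implementation): derive the URL
    # of the checksum file from the image URL and the checksum type.
    import posixpath

    COMPRESSED = (".gz", ".bz2", ".xz")

    def checksum_url(image_url, checksum_type):
        base, ext = posixpath.splitext(image_url)
        if ext in COMPRESSED:
            # A compression extension is replaced: image.xz -> image.md5
            return "{}.{}".format(base, checksum_type)
        # Otherwise the type is appended: image.ext4 -> image.ext4.md5
        return "{}.{}".format(image_url, checksum_type)
    ```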
    If the path contains an
    extension for a compression format, this will be replaced with the
    checksum type as a new extension.

    For example, if the URL is `http://example.com/image.xz`, the
    checksum type is `md5`, and no value is provided, the checksum
    will be expected at `http://example.com/image.md5`.

    But if the URL is something like `http://example.com/image.ext4`,
    the checksum will be expected at the URL
    `http://example.com/image.ext4.md5`.

  If the checksum type is provided, the value for the last image will
  be stored in the Salt cache, and will be used to decide whether the
  image in the URL is different from the one already copied into the
  partition. If it is the same, no image will be
  downloaded. Otherwise a new image will be copied, and the old one
  will be overwritten in the same partition.

Example:

```yaml
software:
  repositories:
    repo-oss: "http://download.opensuse.org/tumbleweed/repo/oss"
    update:
      url: http://download.opensuse.org/update/tumbleweed/
      name: openSUSE Update
  packages:
    - patterns-base-base
    - kernel-default
```

## `suseconnect` section

Closely related to the previous section (`software`), we can register
an SLE product and modules using the `SUSEConnect` command.

In order for `SUSEConnect` to succeed, a product needs to be already
present in the system. This implies that the registration must happen
after (at least a partial) installation has been done.

As `SUSEConnect` will register new repositories, this also implies
that not all the packages that we want to install can be enumerated
in the `software` section.

To resolve both conflicts, Yomi will first install the packages listed
in the `software` section, and after the registration, the packages
listed in this `suseconnect` section.

* `config`. Dictionary.

  Local configuration for the section.
  It is not optional, as there
  is at least one parameter that is required for any registration.

  * `regcode`. String.

    Subscription registration code for the product to be registered.

  * `email`. String. Optional.

    Email address for product registration.

  * `url`. String. Optional.

    URL of the registration server (e.g. https://scc.suse.com)

  * `version`. String. Optional.

    Version part of the product name. If the product name does not
    have a version, this default value will be used.

  * `arch`. String. Optional.

    Architecture part of the product name. If the product name does
    not have an architecture, this default value will be used.

* `products`. Array. Optional.

  Product names to register. The expected format is
  `<name>/<version>/<architecture>`. If only `<name>` is used, the
  values for `<version>` and `<architecture>` will be taken from the
  `config` section.

  If the product / module has a different registration code than the
  one declared in the `config` sub-section, we can declare a new one
  via a dictionary.

  * `name`. String. Optional.

    Product name to register. The expected format is
    `<name>/<version>/<architecture>`. If only `<name>` is used, the
    values for `<version>` and `<architecture>` will be taken from
    the `config` section.

  * `regcode`. String. Optional.

    Subscription registration code for the product to be registered.

* `packages`. Array.
  Optional.

  List of packages or patterns to be installed from the different
  modules.

Example:

```yaml
suseconnect:
  config:
    regcode: SECRET-CODE
  products:
    - sle-module-basesystem/15.2/x86_64
    - sle-module-server-applications/15.2/x86_64
    - name: sle-module-live-patching/15.2/x86_64
      regcode: SECRET-CODE
```

## `salt-minion` section

Install and configure the salt-minion service.

* `config`. Boolean. Optional. Default: `no`

  If `yes`, the configuration and certificates of the new minion will
  be the same as those of the currently active minion. This will copy
  the minion configuration, certificates and grains, together with
  the cached modules and states that are usually synchronized before
  a highstate.

  This option will be replaced in the future with more detailed ones.

Example:

```yaml
salt-minion:
  config: yes
```

## `services` section

We can list the services that will be enabled or disabled during boot
time.

* `enabled`. Array. Optional.

  List of services that will be enabled and started during the boot.

* `disabled`. Array. Optional.

  List of services that will be explicitly disabled during the boot.

Example:

```yaml
services:
  enabled:
    - salt-minion
```

## `networks` section

We can list the networks available in the target system. If the list
is not provided, Yomi will try to deduce the network configuration
based on the current setup.

* `interface`. String.

  Name of the interface.

Example:

```yaml
networks:
  - interface: ens3
```

## `users` section

In this section we can provide a simple list of users and passwords
that we expect to find once the system is booted.

* `username`. String.

  Login or username for the user.

* `password`. String. Optional.

  Shadow password hash for the user.

* `certificates`. Array. Optional.

  Certificates that will be added to `.ssh/authorized_keys`.
  Use only
  the encoded key (remove the "ssh-rsa" prefix and the "user@host"
  suffix).

Example:

```yaml
users:
  - username: root
    password: "$1$wYJUgpM5$RXMMeASDc035eX.NbYWFl0"
  - username: aplanas
    certificates:
      - "AAAAB3NzaC1yc2EAAAADAQABAAABAQDdP6oez825gnOLVZu70KqJXpqL4fGf\
        aFNk87GSk3xLRjixGtr013+hcN03ZRKU0/2S7J0T/dICc2dhG9xAqa/A31Qac\
        hQeg2RhPxM2SL+wgzx0geDmf6XDhhe8reos5jgzw6Pq59gyWfurlZaMEZAoOY\
        kfNb5OG4vQQN8Z7hldx+DBANPbylApurVz6h5vvRrkPfuRVN5ZxOkI+LeWhpo\
        vX5XK3eTjetAwWEro6AAXpGoQQQDjSOoYHCUmXzcZkmIWEubCZvAI4RZ+XCZs\
        +wTeO2RIRsunqP8J+XW4cZ28RZBc9K4I1BV8C6wBxN328LRQcilzw+Me+Lfre\
        eDPglqx"
```
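The encoded key expected by `certificates` can be extracted from a
standard public key line; a minimal sketch (an illustrative helper,
not part of Yomi):

```python
# Illustrative helper (not part of Yomi): keep only the base64 body
# of a public key line, dropping the "ssh-rsa" prefix and the
# "user@host" comment, as expected by the `certificates` field.
def encoded_key(pubkey_line):
    # Format of an authorized_keys line: "<type> <base64-key> [comment]"
    return pubkey_line.split()[1]
```

The resulting string can then be split across several lines in the
YAML file with trailing backslashes, as in the example above.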