{"id":20510344,"url":"https://github.com/frobware/kvm-cluster-up","last_synced_at":"2025-04-13T22:33:36.027Z","repository":{"id":66961069,"uuid":"122258096","full_name":"frobware/kvm-cluster-up","owner":"frobware","description":"Scripts and utilities to install and manage KVM machines","archived":false,"fork":false,"pushed_at":"2023-01-25T10:24:15.000Z","size":61,"stargazers_count":13,"open_issues_count":0,"forks_count":3,"subscribers_count":1,"default_branch":"master","last_synced_at":"2025-04-10T11:14:18.869Z","etag":null,"topics":["cluster","kubernetes","kvm","openshift","scripts"],"latest_commit_sha":null,"homepage":null,"language":"Shell","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/frobware.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2018-02-20T21:23:24.000Z","updated_at":"2024-02-29T18:53:56.000Z","dependencies_parsed_at":"2023-07-31T13:17:14.558Z","dependency_job_id":null,"html_url":"https://github.com/frobware/kvm-cluster-up","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/frobware%2Fkvm-cluster-up","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/frobware%2Fkvm-cluster-up/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/frobware%2Fkvm-cluster-up/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/frobware%2Fkvm-cluster-up/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/frobwa
re","download_url":"https://codeload.github.com/frobware/kvm-cluster-up/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248790737,"owners_count":21162082,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cluster","kubernetes","kvm","openshift","scripts"],"created_at":"2024-11-15T20:29:07.404Z","updated_at":"2025-04-13T22:33:36.010Z","avatar_url":"https://github.com/frobware.png","language":"Shell","readme":"# Introduction\n\nScripts to conveniently install and manage multiple KVM machines.\n\nDuring development, or in particular when I'm trying to reproduce\nbugs, I often need a group of machines that can be quickly\nprovisioned and isolated from whatever I'm currently doing. These\nmachines typically need identical configuration (i.e., RAM \u0026 disk \u0026\nnetwork). 
I also require a naming pattern so that when I run `virsh\nlist` I can _actually_ recall why I spun them up in the first place.\n\nThe scripts in this repository allow you to:\n\n- provision KVM-based machines, based on a profile\n- manage those machines (reboot, shutdown, start)\n- take snapshots\n- delete snapshots\n- revert to a particular snapshot\n- upload images to the KVM storage pool\n\n# Profiles\n\nTo manage disparate configurations we have the notion of a *profile*.\nA profile is just a file with per-profile properties.\n\n## Example\n\n\t$ cat kup-centos7\n\texport KUP_PREFIX=centos7\n\texport KUP_NETWORK=k8s\n\texport KUP_DOMAINNAME=k8s.home\n\t# https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2\n\texport KUP_CLOUD_IMG=CentOS-7-x86_64-GenericCloud.qcow2\n\texport KUP_OS_VARIANT=rhel7.4\n\texport KUP_CLOUD_USERNAME=centos\n\nTaking these environment variables in turn we have:\n\n- `KUP_PREFIX` - the prefix for machine names; machines will be\n  provisioned as `$KUP_PREFIX-vm-$N`.\n- `KUP_NETWORK` - the libvirt network; this needs to be provisioned\n  ahead of time and it also needs to be started/running\n- `KUP_DOMAINNAME` - the domain the KVM machine resides in. 
This is\n  passed as meta-data to cloud-init.\n- `KUP_CLOUD_IMG` - the image to clone for the new machine\n- `KUP_CLOUD_USERNAME` - the cloud-image user name\n- `KUP_OS_VARIANT` - optional; helper for libvirt\n\n### RHEL 7.4 Example\n\n\t$ cat kup-rhel74\n\texport KUP_PREFIX=rhel74-dev\n\texport KUP_NETWORK=k8s\n\texport KUP_DOMAINNAME=k8s.home\n\texport KUP_CLOUD_IMG=rhel-server-7.4-x86_64-kvm.qcow2\n\texport KUP_OS_VARIANT=rhel7.4\n\texport KUP_CLOUD_USERNAME=cloud-user\n\n### Fedora 27 Example\n\n\t$ cat kup-fedora27\n\texport KUP_PREFIX=fedora27-dev\n\texport KUP_NETWORK=k8s\n\texport KUP_DOMAINNAME=k8s.home\n\t# https://download.fedoraproject.org/pub/fedora/linux/releases/27/CloudImages/x86_64/images/Fedora-Cloud-Base-27-1.6.x86_64.qcow2\n\texport KUP_CLOUD_IMG=Fedora-Cloud-Base-27-1.6.x86_64.qcow2\n\texport KUP_OS_VARIANT=fedora26\t# no variant in libvirt (ATM) for fedora27\n\texport KUP_CLOUD_USERNAME=fedora\n\n### Debian Example\n\n\t$ cat kup-debian9\n\texport KUP_PREFIX=debian9\n\texport KUP_NETWORK=k8s\n\texport KUP_DOMAINNAME=k8s.home\n\texport KUP_CLOUD_IMG=debian-9.3.5-20180213-openstack-amd64.qcow2\n\texport KUP_OS_VARIANT=linux\t# no variant in libvirt (ATM)\n\texport KUP_CLOUD_USERNAME=debian\n\n# Building a Cluster\n\nTo build a cluster based on a profile run:\n\n\t$ KUP_ENV=$HOME/kup-centos7 kup-domain-install 1\n\tadding pubkey from /home/aim/.ssh/id_rsa.pub\n\tadding user data from /usr/local/bin/../libexec/kvm-cluster-up/user-data\n\tgenerating configuration image at /tmp/tmp.6xVhuYVKeN/centos7-vm-1-ds.iso\n\tPool default refreshed\n\tVol centos7-vm-1-ds.iso created\n\tVol centos7-vm-1.qcow2 cloned from CentOS-7-x86_64-GenericCloud.qcow2\n\tSize of volume 'centos7-vm-1.qcow2' successfully changed to +50G\n\tPool default refreshed\n\tStarting install...\n\tDomain creation completed.\n\nThis provisions and boots asynchronously. The default action is to\nboot the machine, let cloud-init run, then power off. 
I chose to\npower off as the default to facilitate snapshots. The provisioning\nstep dynamically creates a cloud-init data-store as an ISO and\nattaches that disk as `/dev/vdb`. That disk needs to be detached if\nyou want to use snapshots.\n\nBut one machine does not make a cluster! In general all the `kup-*`\nscripts take *instance-id* arguments, where an *instance-id* is just a\nunique symbol.\n\nTo provision multiple machines:\n\n\t$ KUP_ENV=$HOME/kup-centos7 kup-domain-install 1 2 3 4\n\n\t$ virsh list --all | grep centos7-vm\n\t13 centos7-vm-2 running\n\t14 centos7-vm-3 running\n\t15 centos7-vm-4 running\n\t16 centos7-vm-1 running\n\nTo provision more machines:\n\n\t$ KUP_ENV=$HOME/kup-centos7 kup-domain-install 80 90 100\n\nThe numbers do not need to be consecutive; they are just used to\nprovide unique names. In fact, they don't even need to be numbers.\n\n\t$ KUP_ENV=$HOME/kup-centos7 kup-domain-install master etcd node1 node2 node3\n\n\t$ virsh list --all\n\tId    Name                           State\n\t----------------------------------------------------\n\t-     centos7-vm-1                   shut off\n\t-     centos7-vm-2                   shut off\n\t-     centos7-vm-3                   shut off\n\t-     centos7-vm-4                   shut off\n\t-     centos7-vm-master              shut off\n\t-     centos7-vm-etcd                shut off\n\t-     centos7-vm-node1               shut off\n\t-     centos7-vm-node2               shut off\n\t-     centos7-vm-node3               shut off\n\n## Memorable Profiles\n\nSometimes I find I have a number of centos7 machines already running\nthat should not be perturbed, but I need **moar** to investigate a\ndifferent bug so let's just create new instances...\n\n\t$ KUP_ENV=$HOME/kup-centos7 kup-domain-install 10 20 30\n\nBut relying on different sets of numbers can get confusing; it's\neasier to create another profile with a prefix that has more context:\n\n\t$ cat kup-centos7-bz18020\n\texport KUP_PREFIX=centos7-bz18020\n\texport 
KUP_NETWORK=k8s\n\texport KUP_DOMAINNAME=k8s.home\n\texport KUP_CLOUD_IMG=CentOS-7-x86_64-GenericCloud.qcow2\n\texport KUP_OS_VARIANT=rhel7.4\n\texport KUP_CLOUD_USERNAME=centos\n\n## Exporting KUP_ENV\n\n**Here be dragons**\n\nYou can export `KUP_ENV`, which means you don't have to prefix every\n`kup-*` invocation:\n\n\t$ export KUP_ENV=$HOME/kup-centos7\n\t# Boot a machine\n\t$ kup-domain-install 1\n\t# Do lots of work in another shell, lunch, ...\n\n\t# Come back from lunch...\n\t$ kup-domain-delete 1\n\t# **OOPS!** This wasn't the profile I thought I was using... Dang! IRL, way too often. :/\n\n# Accessing the machines\n\nAs I only use cloud-based images, you need to use the correct\n*username* when accessing the machines. This is also why you need to\nspecify `KUP_CLOUD_USERNAME` in the profile. I wrap up access in my\n`$HOME/.ssh/config`:\n\n\tHost *\n\t\tGSSAPIAuthentication no\n\t\tCanonicalizeHostname yes\n\n\tHost centos7-vm-1 centos7-vm-2 centos7-vm-3 centos7-vm-4 centos7-vm-5 centos7-vm-6 centos7-vm-7 centos7-vm-8\n\t\tHostName %h.k8s.home\n\t\tUser centos\n\n\tHost rhel74-vm-1 rhel74-vm-2 rhel74-vm-3 rhel74-vm-4 rhel74-vm-5 rhel74-vm-6 rhel74-vm-7 rhel74-vm-8\n\t\tHostName %h.k8s.home\n\t\tUser cloud-user\n\n\tHost fedora27-vm-1 fedora27-vm-2 fedora27-vm-3 fedora27-vm-4 fedora27-vm-5 fedora27-vm-6 fedora27-vm-7 fedora27-vm-8\n\t\tHostName %h.k8s.home\n\t\tUser fedora\n\n\tHost *.k8s.home\n\t\tGSSAPIAuthentication no\n\t\tControlPersist 10m\n\t\tControlMaster auto\n\t\tControlPath /tmp/%r@%h:%p\n\t\tForwardAgent yes\n\t\tStrictHostKeyChecking no\n\t\tUserKnownHostsFile /dev/null\n\t\tHashKnownHosts no\n\t\tLogLevel QUIET\n\t\tCheckHostIP no\n\nNote: as these machines are on my LAN and tend to have a short life,\nthe security setup is, well, super-lax! 
I could also use wildcards but\nthen tab completion wouldn't work.\n\n## Logging into the instance\n\nBased on the previous `.ssh/config` we can log straight in without\nhaving to worry about what the user name should be.\n\n\t$ ssh centos7-vm-1\n\t$ ssh rhel74-vm-2\n\t$ ssh fedora27-vm-3\n\nYou can also log in via the console:\n\n\t$ virsh dominfo centos7-vm-1 | grep Id: | awk '{ print $2 }'\n\t# 19\n\t$ virsh console 19\n\tConnected to domain centos7-vm-1\n\tEscape character is ^]\n\n\tCentOS Linux 7 (Core)\n\tKernel 3.10.0-693.el7.x86_64 on an x86_64\n\tcentos7-vm-1 login: centos\n\tPassword: password\n\n\tLast login: Wed Feb 21 11:19:23 from 192.168.30.64\n\nI told you the security was super-lax!\n\n# Machine management\n\nThere are a number of scripts to aid in provisioning, starting up,\nshutting down and rebooting the machines in the cluster.\n\n- `kup-domain-delete`\n- `kup-domain-install`\n- `kup-domain-reboot`\n- `kup-domain-start`\n- `kup-domain-stop`\n- `kup-domain-ipaddr`\n\nEach script takes `\u003cinstance-id\u003e...` as its only arguments.\nHopefully these are all very obvious and orthogonal.\n\n\t$ kup-domain-delete 1 2 3 4\n\t$ kup-domain-delete node1 etcd\n\t$ kup-domain-install 1 2 3 4\n\t$ kup-domain-reboot 82 90 91\n\t$ kup-domain-reboot master\n\t$ kup-domain-start 1 2 3 4\n\t$ kup-domain-start master etcd node1 node2\n\t$ kup-domain-stop  master etcd node1 node2\n\t$ kup-domain-ipaddr master etcd node1 node2\n\n# Snapshots\n\nReinstalling is not for fun or profit! Well, not normally. 
To speed up\nmy dev-cycle I tend to make liberal use of snapshots.\n\n## Install base image and detach config-drive\n\n\t$ kup-domain-install 1 2 3 4\n\t# wait for machine(s) to provision and power off\n\t$ kup-detach-config-drive 1 2 3 4\n\tDisk detached successfully\n\tDisk detached successfully\n\tDisk detached successfully\n\tDisk detached successfully\n\n## Take a snapshot\n\nPrep the machine(s) in some way, then take a snapshot so we can easily\nrevert:\n\n\t$ SNAPSHOT=baseinstall kup-snapshot-create 1\n\tDomain snapshot baseinstall created\n\n\t$ kup-domain-start 1\n\tDomain centos7-vm-1 started\n\n\t$ ansible-playbook -i centos7-vm-1, /path/to/playbook.yaml\n\n\t$ kup-domain-stop 1\n\tDomain centos7-vm-1 destroyed\n\n\t$ SNAPSHOT=pkg-refresh kup-snapshot-create 1\n\tDomain snapshot pkg-refresh created\n\n## List snapshots\n\n\t$ kup-snapshot-list 1 2 3 4\n\n\tName                 Creation Time             State\n\t------------------------------------------------------------\n\tbaseinstall          2018-02-21 11:43:01 +0000 shutoff\n\tpkg-refresh          2018-02-21 11:43:40 +0000 shutoff\n\n\tName                 Creation Time             State\n\t------------------------------------------------------------\n\n\tName                 Creation Time             State\n\t------------------------------------------------------------\n\n\tName                 Creation Time             State\n\t------------------------------------------------------------\n\nHere you can see I've only created snapshots for the machine\nidentified as `1`. 
But really the whole point of these scripts is to\ntake the same action against multiple machines:\n\n\t$ SNAPSHOT=baseinstall kup-snapshot-create 1 2 3 4\n\tDomain snapshot baseinstall created\n\tDomain snapshot baseinstall created\n\tDomain snapshot baseinstall created\n\tDomain snapshot baseinstall created\n\n## Reverting/Selecting a snapshot by name\n\n\t$ kup-domain-stop 1\n\t$ SNAPSHOT=baseinstall kup-snapshot-select 1\n\t$ kup-domain-start 1\n\n## Updating a snapshot\n\n\t$ kup-domain-stop 1\n\t$ SNAPSHOT=baseinstall kup-snapshot-select 1\n\t# Start from a known-good place.\n\t$ kup-domain-start 1\n\t\n\t# login, do some corrections in \"baseinstall\", then snapshot again.\n\t$ kup-domain-stop 1\n\t$ SNAPSHOT=pkg-refresh kup-snapshot-replace 1\n\n# Installation\n\nPlease read the [INSTALL](INSTALL.md) companion.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffrobware%2Fkvm-cluster-up","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ffrobware%2Fkvm-cluster-up","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffrobware%2Fkvm-cluster-up/lists"}