# homecloud

General steps for deploying a personal cloud using Nextcloud.

Repository: https://github.com/escomputers/homecloud (License: Apache-2.0)
## Minimum Requirements
* Ubuntu >=18 or Debian >=9
* CPU/Memory: 2 CPU/4GB RAM
* Storage: 100GB SSD hard drive
* A DNS A record or a Cloudflare Tunnel
* HTTP and HTTPS ports opened

## Run Nextcloud AIO
```bash
# Make sure to set NEXTCLOUD_DATADIR and NEXTCLOUD_MOUNT paths
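# One way to do that (assumption: the compose file substitutes ${NEXTCLOUD_DATADIR}
# and ${NEXTCLOUD_MOUNT} from a .env file next to it; adjust paths to your setup)
cat > .env <<'EOF'
NEXTCLOUD_DATADIR=/mnt/ncdata
NEXTCLOUD_MOUNT=/mnt/
EOF
# Preview the rendered compose file before starting the stack
docker compose -p homecloud config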
docker compose -p homecloud up -d
# Reference: https://github.com/nextcloud/all-in-one?tab=readme-ov-file#nextcloud-all-in-one
```

## Encryption at rest
Only newly uploaded files will be encrypted unless you also run the `occ encryption:encrypt-all` command
```bash
docker exec --user www-data -it nextcloud-aio-nextcloud php occ encryption:enable
docker exec --user www-data -it nextcloud-aio-nextcloud php occ encryption:status
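# Optional: also encrypt files uploaded before encryption was enabled
# (this can take a long time on large data directories)
docker exec --user www-data -it nextcloud-aio-nextcloud php occ encryption:encrypt-all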
# Reference: https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/encryption_configuration.html#occ-encryption-commands
```

## Automatic backups and upload to S3 Glacier Deep Archive
First, enable automatic daily backups using the AIO interface and take note of the backup encryption password.

Nextcloud AIO uses BorgBackup as the underlying backup technology. By default, it sets a retention policy of:
- Keep 7 end-of-day, 4 additional end-of-week and 6 end-of-month archives

### Server configuration
1. Install required packages
```bash
sudo apt update && sudo apt install -y jq unzip

# Install awscli
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# Make sure to use the latest stable version of aws_signing_helper
wget https://rolesanywhere.amazonaws.com/releases/1.4.0/X86_64/Linux/aws_signing_helper
chmod +x aws_signing_helper
sudo mv aws_signing_helper /usr/local/bin/
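# Optional sanity checks: both tools should now be on PATH
aws --version
command -v aws_signing_helper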
```
2. Install rclone
```bash
sudo -v ; curl https://rclone.org/install.sh | sudo bash
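# Optional: confirm the install; after step 3 below, `rclone listremotes`
# should also show the remote defined in rclone.conf
rclone version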
```
3. Configure rclone by changing the settings in the [rclone.conf file](rclone.conf), then move it to your rclone config directory, usually `~/.config/rclone/rclone.conf`
4. Set up PKI for AWS login
```bash
# Create a private key for CA certificate
openssl genrsa -out homecloud-root-ca.key 4096

# Create CA certificate (valid for 10 years) using an OpenSSL configuration file
# Make sure to change all values inside the [ dn ] SECTION before applying the following command
openssl req -x509 -new -nodes -config certificates/selfsigned-ca.cnf -key homecloud-root-ca.key -days 3650 -out homecloud-root-ca.crt

# Create a private key for client certificate
openssl genrsa -out homecloud-client.key 2048

# Create client certificate Signing Request
# Make sure that the --subj argument values match the [ dn ] SECTION inside the selfsigned-ca.cnf configuration file before applying the following command
openssl req -new -key homecloud-client.key -out homecloud-client.csr -subj "/C=IT/ST=Ragusa/L=Acate/O=HomeCloud/CN=homecloud.yourdomain.com"

# Sign client certificate using CA (valid for 1 year) and use an OpenSSL configuration file
# to apply certificate extensions required by AWS
openssl x509 -req -in homecloud-client.csr -CA homecloud-root-ca.crt -CAkey homecloud-root-ca.key -CAcreateserial -out homecloud-client.crt -days 365 -sha256 -extfile certificates/homecloud-client.cnf -extensions homecloudclient_extensions
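# Verify the client certificate chains to the CA before using it with AWS
openssl verify -CAfile homecloud-root-ca.crt homecloud-client.crt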
```

### AWS configuration
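The resources created in the numbered steps below are what the Roles Anywhere signing helper installed earlier uses to obtain temporary AWS credentials. As a rough preview of how the pieces fit together, the exchange looks like this (a sketch only; the ARN values are placeholders you will obtain in steps 1, 4 and 5, and the backup script is expected to perform this call for you):
```bash
# Sketch: exchange the client certificate for temporary AWS credentials
aws_signing_helper credential-process \
  --certificate homecloud-client.crt \
  --private-key homecloud-client.key \
  --trust-anchor-arn <rolesanywhere-trustanchor-arn> \
  --profile-arn <rolesanywhere-profile-arn> \
  --role-arn <iam-role-arn>
```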
1. Create a Roles Anywhere Trust Anchor to establish trust between the server and AWS using the Certificate Authority:
- Certificate authority (CA) source = External certificate bundle
- External certificate bundle = Paste the content of homecloud-root-ca.crt into the box
- (Optional) customize Notification settings for certificate expiration alerts
2. Create an S3 bucket along with a Lifecycle rule with the action "Expire current versions of objects" and set a value of your liking for the "Days after object creation" field. This removes old tar.gz archives and frees up disk space
3. Create an [IAM Policy](iam/iam-role-policy.json) but change `s3bucketname` to match your S3 bucket name
4. Create an IAM Role:
- use Roles Anywhere as Service Principal
- attach the previously created permission policy to it
- add a [Trust Policy](iam/iam-role-trust-policy.json) but replace `rolesanywhere-trustanchor-arn` with the Trust Anchor ARN created before
- (Optional) customize the Maximum session duration value according to your liking (currently 4hrs). Make sure to change the `--session-duration` parameter within the [homecloud_backup.sh file](homecloud_backup.sh) accordingly.
5. Create a Roles Anywhere Profile:
- select the previously created IAM Role from the dropdown
- (Optional) customize the Maximum session duration value according to your liking (currently 4hrs). Make sure to change the `--session-duration` parameter within the [homecloud_backup.sh file](homecloud_backup.sh) accordingly.

### Backup configuration
1. Change the [ENV file](homecloud_backup.env) according to your setup, then:
```bash
sudo mv homecloud_backup.env /etc/homecloud_backup.env && sudo chmod 600 /etc/homecloud_backup.env
sudo mv homecloud_backup.sh /usr/local/bin/homecloud_backup.sh && sudo chmod 644 /usr/local/bin/homecloud_backup.sh
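# Optional: run the script once by hand to verify AWS credentials and the S3
# upload before relying on cron (assumption: the script reads
# /etc/homecloud_backup.env on its own)
sudo bash /usr/local/bin/homecloud_backup.sh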
```
2. Set a Cronjob to automatically run the backup script
```bash
crontab -e
# Every 10 days at 4:00am
0 4 */10 * * bash /usr/local/bin/homecloud_backup.sh
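# Non-interactive alternative to editing the crontab by hand:
( crontab -l 2>/dev/null; echo '0 4 */10 * * bash /usr/local/bin/homecloud_backup.sh' ) | crontab -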
```

## Restore files from S3 Deep Archive
```bash
# Replace <s3bucketname> below with your bucket name

# List S3 objects with StorageClass Glacier Deep Archive
aws s3api list-objects --bucket <s3bucketname> | grep "StorageClass" | grep DEEP_ARCHIVE

# Change object StorageClass for 2 days from Deep Archive to Standard
aws s3api restore-object \
  --bucket <s3bucketname> \
  --key "borg_2025-03-11_22-50-21.tar.gz" \
  --restore-request '{"Days":2, "GlacierJobParameters": {"Tier": "Standard"}}'

# Check restoration status
aws s3api head-object --bucket <s3bucketname> --key borg_2025-03-11_22-50-21.tar.gz

# Misc
# check S3 bucket usage
aws s3 ls s3://<s3bucketname> --recursive --human-readable --summarize
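# Once the restore completes, download the archive and inspect its layout before
# extracting (the extracted directory must end up named "borg", e.g. /mnt/borg,
# as described in the next section)
aws s3api get-object --bucket <s3bucketname> --key borg_2025-03-11_22-50-21.tar.gz borg_2025-03-11_22-50-21.tar.gz
tar -tzf borg_2025-03-11_22-50-21.tar.gz | head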
```

## Restore Nextcloud data into a new server
Reference: https://github.com/nextcloud/all-in-one?tab=readme-ov-file#how-to-migrate-from-aio-to-aio

1. Install a valid SSL certificate on the server:
```bash
# Make sure that no process is running on port 80
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
sudo certbot certonly --standalone
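# Confirm the certificate was issued and note its location under /etc/letsencrypt/live/
sudo certbot certificates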
```
2. Once you've installed the new AIO Nextcloud instance, start the compose project and then go to https://yourdomain.com:8443/login
3. Copy the tar.gz archive of the Borg repository to the new host, then extract it into a directory. The extracted directory name must be `borg`, e.g. `/mnt/borg`
4. On AIO Nextcloud Interface webpage, select "Restore AIO instance":
- enter the path of the extracted backup without specifying the directory name, e.g. if the backup is placed at `/mnt/borg`, use `/mnt`
- enter the Borg encryption password
5. Change domain (if required)
Reference: https://github.com/nextcloud/all-in-one?tab=readme-ov-file#how-to-change-the-domain
```bash
# Replace each occurrence of old domain with the new one inside configuration.json
sudo docker run -it --rm --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config:rw alpine sh -c "apk add --no-cache nano && nano /mnt/docker-aio-config/data/configuration.json"

# Entries to update inside configuration.json:
'overwritehost' => 'newurl.com'
'trusted_domains' => array (0 => 'localhost', 1 => 'newurl.com')
'overwrite.cli.url' => 'https://newurl.com/'
```
After that, restart/start all Nextcloud containers and everything should work as expected.
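One way to do that is shown below (a sketch only; the managed Nextcloud containers can also be stopped and started from the AIO interface):
```bash
# Restart the AIO mastercontainer so it picks up the edited configuration.json,
# then start the Nextcloud containers again from the AIO interface
sudo docker restart nextcloud-aio-mastercontainer
```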