{"id":43759071,"url":"https://github.com/bcgov/backup-container","last_synced_at":"2026-02-05T15:00:55.679Z","repository":{"id":33703768,"uuid":"149611400","full_name":"bcgov/backup-container","owner":"bcgov","description":"A simple container for a simple backup strategy. ","archived":false,"fork":false,"pushed_at":"2026-02-04T23:38:52.000Z","size":672,"stargazers_count":41,"open_issues_count":30,"forks_count":62,"subscribers_count":5,"default_branch":"master","last_synced_at":"2026-02-05T11:41:45.204Z","etag":null,"topics":["backup","backup-container","backup-script","backup-strategies","cronjob","openshift","postgres","postgresql","restore","verification"],"latest_commit_sha":null,"homepage":null,"language":"Shell","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/bcgov.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2018-09-20T13:12:04.000Z","updated_at":"2026-02-04T23:38:56.000Z","dependencies_parsed_at":"2026-02-05T15:00:35.850Z","dependency_job_id":null,"html_url":"https://github.com/bcgov/backup-container","commit_stats":null,"previous_names":["bcgov/backup-container","bcdevops/backup-container"],"tags_count":30,"template":false,"template_full_name":null,"purl":"pkg:github/bcgov/backup-container","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bcgov%2Fbackup-container","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bcgov%2Fbackup-container/t
ags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bcgov%2Fbackup-container/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bcgov%2Fbackup-container/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/bcgov","download_url":"https://codeload.github.com/bcgov/backup-container/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bcgov%2Fbackup-container/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29124793,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-05T14:05:12.718Z","status":"ssl_error","status_checked_at":"2026-02-05T14:03:53.078Z","response_time":65,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["backup","backup-container","backup-script","backup-strategies","cronjob","openshift","postgres","postgresql","restore","verification"],"created_at":"2026-02-05T15:00:24.560Z","updated_at":"2026-02-05T15:00:55.665Z","avatar_url":"https://github.com/bcgov.png","language":"Shell","readme":"---\ntitle: Backup Container\ndescription: A simple containerized backup solution for backing up one or more supported databases to a secondary location.\nauthor: WadeBarnes\nresourceType: Components\npersonas:\n  - Developer\n  - Product Owner\n  - Designer\nlabels:\n  - backup\n  - backups\n  - postgres\n  - mongo\n  - 
mssql\n  - database\n---\n\n[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE)\n\n_Table of Contents_\n\n\u003c!-- TOC depthTo:2 --\u003e\n\n- [Introduction](#introduction)\n  - [Supported Databases \\\u0026 Secondary Locations](#supported-databases--secondary-locations)\n    - [Databases](#databases)\n    - [Secondary Locations](#secondary-locations)\n- [Backup Container Options](#backup-container-options)\n  - [Backups in OpenShift](#backups-in-openshift)\n  - [Storage](#storage)\n    - [Backup Storage Volume](#backup-storage-volume)\n    - [Restore / Verification Storage Volume](#restore--verification-storage-volume)\n    - [Storage Performance](#storage-performance)\n  - [Deployment / Configuration](#deployment--configuration)\n    - [backup.conf](#backupconf)\n    - [Cron Mode](#cron-mode)\n    - [Cronjob Deployment / Configuration / Constraints](#cronjob-deployment--configuration--constraints)\n    - [Resources](#resources)\n  - [Multiple Databases](#multiple-databases)\n  - [Backup Strategies](#backup-strategies)\n    - [Daily](#daily)\n    - [Rolling](#rolling)\n  - [Using the Backup Script](#using-the-backup-script)\n  - [Using Backup Verification](#using-backup-verification)\n  - [Using the FTP backup](#using-the-ftp-backup)\n  - [Using the Webhook Integration](#using-the-webhook-integration)\n  - [Database Plugin Support](#database-plugin-support)\n  - [Backup](#backup)\n    - [Immediate Backup:](#immediate-backup)\n      - [Execute a single backup cycle with the pod deployment](#execute-a-single-backup-cycle-with-the-pod-deployment)\n      - [Execute an on demand backup using the scheduled job](#execute-an-on-demand-backup-using-the-scheduled-job)\n    - [Restore](#restore)\n  - [Network Policies](#network-policies)\n- [Example Deployments](#example-deployments)\n  - [Deploy with Helm Chart](#deploy-with-helm-chart)\n- [Prebuilt Container Images](#prebuilt-container-images)\n- [Postgres Base 
Version](#postgres-base-version)\n- [Tip and Tricks](#tip-and-tricks)\n- [Getting Help or Reporting an Issue](#getting-help-or-reporting-an-issue)\n- [How to Contribute](#how-to-contribute)\n\n\u003c!-- /TOC --\u003e\n\n# Introduction\n\nThis backup system is a straightforward containerized solution designed to back up one or more supported databases to a secondary location.\n\n## Supported Databases \u0026 Secondary Locations\n\n### Databases\n\n- PostgreSQL\n- MongoDB\n- MariaDB\n- MSSQL - Currently, MSSQL requires that the NFS database volume be shared with the database for backups to function correctly.\n\n### Secondary Locations\n\n- OCIO backup infrastructure\n- Amazon S3 (S3 Compatible / OCIO Object Store)\n- FTP Server\n\n# Backup Container Options\n\nYou can run the Backup Container for supported databases either separately or in a mixed environment. If you choose the mixed environment, please follow these guidelines:\n\n1. Use the recommended `backup.conf` configuration.\n2. Within the `backup.conf` file, specify the `DatabaseType` for each listed database.\n3. For each type of supported backup container being used, create a build and deployment configuration.\n4. Mount the same `backup.conf` file (ConfigMap) to each deployed container.\n\nThese steps will help ensure the smooth operation of the backup system.\n\n## Backups in OpenShift\n\nThis project provides a starting point for integrating backups into your OpenShift projects. The scripts and templates provided in the [openshift](./openshift) directory are compatible with the [openshift-developer-tools](https://github.com/BCDevOps/openshift-developer-tools) scripts. They help you create an OpenShift deployment or cronjob called `backup` in your projects that runs backups on databases within the project environment. 
You only need to integrate the scripts and templates into your project(s); the builds can be done with this repository as the source.\n\nAs an alternative to using the command line interface `oc` ([OpenShift CLI](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/cli_tools/index)), you can integrate the backup configurations (Build and Deployment templates, override script, and config) directly into your project configuration and manage the publishing and updating of the Build and Deployment configurations using the [BCDevOps/openshift-developer-tools](https://github.com/BCDevOps/openshift-developer-tools/tree/master/bin) scripts. An example can be found in the [bcgov/orgbook-configurations](https://github.com/bcgov/orgbook-configurations) repository under the [backup templates folder](https://github.com/bcgov/orgbook-configurations/tree/master/openshift/templates/backup).\n\nSimplified documentation on how to use the tools can be found [here](https://github.com/bcgov/jag-cullencommission/tree/master/openshift). All scripts support a `-c` option that allows you to perform operations on a single component of your application, such as the backup container. In the orgbook-configurations example above, note the `-c backup` argument supplied.\n\nFollowing are the instructions for running backups and restores.\n\n## Storage\n\nThe backup container utilizes two volumes: one for storing the backups and another for restore/verification testing. The deployment template deliberately separates these volumes.\n\nThe upcoming sections on storage provide recommendations and limitations regarding the storage classes.\n\n### Backup Storage Volume\n\nWe recommend using the `netapp-file-backup` storage class for the backup Persistent Volume Claim (PVC). This storage class is supported by the standard OCIO backup infrastructure and has a default quota of 25Gi. 
If you require additional storage, please submit an iStore request to adjust the quota accordingly. The backup retention policy for the backup infrastructure is as follows:\n\n- Backup:\n  - Daily: Incremental\n  - Monthly: Full\n- Retention: 90 days\n\nIf you are utilizing S3 storage or the corporate S3 compatible storage, you may not need to use the `netapp-file-backup` storage class. These systems are already replicated and highly redundant. In such cases, we recommend using the following storage classes:\n\n- `netapp-file-standard` for backup storage\n- `netapp-file-standard` for restore/verification storage\n\nTo implement this, create a PVC using the appropriate storage class and mount it to your pod at the `/backups` mount point. Or, if you're using the provided deployment template, update or override the `BACKUP_VOLUME_STORAGE_CLASS` parameter.\n\nFor more detailed information, please visit the [DevHub](https://developer.gov.bc.ca/docs/default/component/platform-developer-docs/docs/automation-and-resiliency/netapp-backup-restore/) page.\n\n### Restore / Verification Storage Volume\n\nThe restore/verification volume should use the default storage class `netapp-file-standard`. Please avoid using `netapp-file-backup`, as it is not suitable for transient workloads. The provided deployment template will automatically provision this volume when it is published.\n\nEnsure that the volume is large enough to accommodate your largest database. You can set the size by updating or overriding the `VERIFICATION_VOLUME_SIZE` parameter in the provided OpenShift template.\n\n### Storage Performance\n\nOur PVCs are backed by NetApp storage. Storage performance is not affected by the storage class chosen.\n\n## Deployment / Configuration\n\nTogether, the scripts and templates provided in the [openshift](./openshift) directory will automatically deploy the `backup` app as described below. 
The [backup-deploy.overrides.sh](./openshift/backup-deploy.overrides.sh) script generates the deployment configuration necessary for the [backup.conf](config/backup.conf) file to be mounted as a ConfigMap by the `backup` container.\n\nThe following environment variables are defaults used by the `backup` app.\n\n**NOTE**: These environment variables MUST MATCH those used by the database container(s) you are planning to backup.\n\n| Name                       | Default (if not set) | Purpose                                                                                                                                                                                                                                                                                                                                                                       |\n| -------------------------- | -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| BACKUP_STRATEGY            | rolling              | To control the backup strategy used for backups. This is explained more below.                                                                                                                                                                                                                                                                                                |\n| BACKUP_DIR                 | /backups/            | The directory under which backups will be stored. The deployment configuration mounts the persistent volume claim to this location when first deployed.                                                                                             
                                                                                                                          |\n| NUM_BACKUPS                | 31                   | Used for backward compatibility only, this value is used with the daily backup strategy to set the number of backups to retain before pruning.                                                                                                                                                                                                                                |\n| DAILY_BACKUPS              | 6                    | When using the rolling backup strategy this value is used to determine the number of daily (Mon-Sat) backups to retain before pruning.                                                                                                                                                                                                                                        |\n| WEEKLY_BACKUPS             | 4                    | When using the rolling backup strategy this value is used to determine the number of weekly (Sun) backups to retain before pruning.                                                                                                                                                                                                                                           |\n| MONTHLY_BACKUPS            | 1                    | When using the rolling backup strategy this value is used to determine the number of monthly (last day of the month) backups to retain before pruning.                                                                                                                                                                                                                        |\n| BACKUP_PERIOD              | 1d                   | Only used for Legacy Mode. Ignored when running in Cron Mode. The schedule on which to run the backups. 
The value is used by a sleep command and can be defined in d, h, m, or s.                                                                                                                                                                                             |\n| DATABASE_SERVICE_NAME      | postgresql           | Used for backward compatibility only. The name of the service/host for the _default_ database target.                                                                                                                                                                                                                                                                         |\n| DATABASE_USER_KEY_NAME     | database-user        | The database user key name stored in database deployment resources specified by DATABASE_DEPLOYMENT_NAME.                                                                                                                                                                                                                                                                     |\n| DATABASE_PASSWORD_KEY_NAME | database-password    | The database password key name stored in database deployment resources specified by DATABASE_DEPLOYMENT_NAME.                                                                                                                                                                                                                                                                 |\n| DATABASE_NAME              | my_postgres_db       | Used for backward compatibility only. The name of the _default_ database target; the name of the database you want to backup.                                                                                                                                                                                                                                                 
|\n| DATABASE_USER              | _wired to a secret_  | The username for the database(s) hosted by the database server. The deployment configuration assumes your database credentials are stored in secrets (as they should be), and the key for the username is `database-user`. The name of the secret must be provided as the `DATABASE_DEPLOYMENT_NAME` parameter to the deployment configuration template. |\n| DATABASE_PASSWORD          | _wired to a secret_  | The password for the database(s) hosted by the database server. The deployment configuration assumes your database credentials are stored in secrets (as they should be), and the key for the password is `database-password`. The name of the secret must be provided as the `DATABASE_DEPLOYMENT_NAME` parameter to the deployment configuration template. |\n| FTP_URL                    |                      | The FTP server URL. If not specified, the FTP backup feature is disabled. The default value in the deployment configuration is an empty value - not specified. |\n| FTP_USER                   | _wired to a secret_  | The username for the FTP server. The deployment configuration creates a secret with the name specified in the FTP_SECRET_KEY parameter (default: `ftp-secret`). The key for the username is `ftp-user` and the value is an empty value by default. |\n| FTP_PASSWORD               | _wired to a secret_  | The password for the FTP server. The deployment configuration creates a secret with the name specified in the FTP_SECRET_KEY parameter (default: `ftp-secret`). The key for the password is `ftp-password` and the value is an empty value by default. 
|\n| S3_USER                    | No Default           | The username for the S3 compatible object store. This may also be referred to as the \"Access key\" in AWS S3. |\n| S3_PASSWORD                | No Default           | The password for the S3 compatible object store. This may also be referred to as the \"Secret key\" in AWS S3. |\n| S3_ENDPOINT                | None                 | The AWS endpoint to use for S3 compatible object storage. For OpenShift minio, use `http://minio-service:9000`. |\n| S3_BUCKET                  | None                 | The bucket to which your backups will be transferred. |\n| PGDUTY_SVC_KEY             |                      | The PagerDuty service integration key. |\n| PGDUTY_URL                 |                      | The PagerDuty Events API URL. Defaults to https://events.pagerduty.com/generic/2010-04-15/create_event.json. |\n| WEBHOOK_URL                |                      | The URL of the webhook endpoint to use for notifications. If not specified, the webhook integration feature is disabled. The default value in the deployment configuration is an empty value - not specified. |\n| ENVIRONMENT_FRIENDLY_NAME  |                      | A friendly (human readable) name of the environment. This variable is used by the webhook integration to identify the environment from which the backup notifications originate. The default value in the deployment configuration is an empty value - not specified. |\n| ENVIRONMENT_NAME           |                      | A name or ID of the environment. 
This variable is used by the webhook integration to identify the environment from which the backup notifications originate. The default value in the deployment configuration is an empty value - not specified. |\n\n### backup.conf\n\nUsing the default configuration you can easily back up a single Postgres database; however, we recommend you extend the configuration and use the `backup.conf` file to list a number of databases for backup and even set a cron schedule for the backups.\n\nWhen using the `backup.conf` file, the following environment variables are ignored, since you list all of your `host`/`database` pairs in the file: `DATABASE_SERVICE_NAME` and `DATABASE_NAME`. To provide the credentials needed for the listed databases, you extend the deployment configuration to include `hostname_USER` and `hostname_PASSWORD` credential pairs which are wired to the appropriate secrets (where hostname matches the hostname/servicename, in all caps and underscores, of the database). For example, if you are backing up a database named `wallet-db/my_wallet`, you would have to extend the deployment configuration to include a `WALLET_DB_USER` and `WALLET_DB_PASSWORD` credential pair, wired to the appropriate secrets, to access the database(s) on the `wallet-db` server.\n\n### Cron Mode\n\nThe `backup` container supports running the backups on a cron schedule. The schedule is specified in the `backup.conf` file. Refer to the [backup.conf](./config/backup.conf) file for additional details and examples.\n\n### Cronjob Deployment / Configuration / Constraints\n\n_This section describes the configuration of an OpenShift CronJob; this is different from the Cron Mode supported by the container when deployed in \"long running\" mode._\n\nThe cronjob object can be deployed in the same manner as the application, and will also have a dependency on the image built by the build config. 
The main constraint for the cronjob objects is that they require a ConfigMap in place of environment variables and do not support `backup.conf` for multiple database backups in the same job. To back up multiple databases, create multiple cronjob objects with their associated ConfigMaps and secrets.\n\nThe following variables are supported in the first iteration of the backup cronjob:\n\n| Name                       | Default (if not set) | Purpose |\n| -------------------------- | -------------------- | ------- |\n| BACKUP_STRATEGY            | daily                | To control the backup strategy used for backups. This is explained more below. |\n| BACKUP_DIR                 | /backups/            | The directory under which backups will be stored. The deployment configuration mounts the persistent volume claim to this location when first deployed. 
                                                                                                                                                      |\n| SCHEDULE                   | 0 1 \\* \\* \\*         | Cron Schedule to Execute the Job (using local cluster system TZ).                                                                                                                                                                                                                                                                                                                          |\n| NUM_BACKUPS                | 31                   | For backward compatibility this value is used with the daily backup strategy to set the number of backups to retain before pruning.                                                                                                                                                                                                                                                        |\n| DAILY_BACKUPS              | 6                    | When using the rolling backup strategy this value is used to determine the number of daily (Mon-Sat) backups to retain before pruning.                                                                                                                                                                                                                                                     |\n| WEEKLY_BACKUPS             | 4                    | When using the rolling backup strategy this value is used to determine the number of weekly (Sun) backups to retain before pruning.                                                                                                                                                                                                                                                        
|\n| MONTHLY_BACKUPS            | 1                    | When using the rolling backup strategy this value is used to determine the number of monthly (last day of the month) backups to retain before pruning.                                                                                                                                                                                                                                     |\n| DATABASE_SERVICE_NAME      | postgresql           | The name of the service/host for the _default_ database target.                                                                                                                                                                                                                                                                                                                            |\n| DATABASE_USER_KEY_NAME     | database-user        | The database user key name stored in database deployment resources specified by DATABASE_DEPLOYMENT_NAME.                                                                                                                                                                                                                                                                                  |\n| DATABASE_PASSWORD_KEY_NAME | database-password    | The database password key name stored in database deployment resources specified by DATABASE_DEPLOYMENT_NAME.                                                                                                                                                                                                                                                                              |\n| POSTGRESQL_DATABASE        | my_postgres_db       | The name of the _default_ database target; the name of the database you want to backup.                                                                                                                
|\n| POSTGRESQL_USER            | _wired to a secret_  | The username for the database(s) hosted by the `postgresql` Postgres server. The deployment configuration assumes your database credentials are stored in secrets (as they should be), and the key for the username is `database-user`. The name of the secret must be provided as the `DATABASE_DEPLOYMENT_NAME` parameter to the deployment configuration template. |\n| POSTGRESQL_PASSWORD        | _wired to a secret_  | The password for the database(s) hosted by the `postgresql` Postgres server. The deployment configuration assumes your database credentials are stored in secrets (as they should be), and the key for the password is `database-password`. The name of the secret must be provided as the `DATABASE_DEPLOYMENT_NAME` parameter to the deployment configuration template. |\n\nThe following variables are NOT supported:\n\n| Name          | Default (if not set) | Purpose                                                                                                   |\n| ------------- | -------------------- | --------------------------------------------------------------------------------------------------------- |\n| BACKUP_PERIOD | 1d                   | The schedule on which to run the backups. The value is replaced by the cron schedule variable (SCHEDULE). |\n\nThe scheduled job does not yet support the FTP environment variables.\n\n| Name         |\n| ------------ |\n| FTP_URL      |\n| FTP_USER     |\n| FTP_PASSWORD |\n\n### Resources\n\nThe backup-container is assigned the `Best-effort` resource type (request and limit set to zero), which allows its resource usage to scale up and down without an explicit limit, as resources on the node allow. 
It benefits from large bursts of resources for short periods of time to get things done more quickly. After running the backup-container for some time, you can set the request and limit according to the average resource consumption.\n\n## Multiple Databases\n\nWhen backing up multiple databases, the retention settings apply to each database individually. For instance, if you use the `daily` strategy and set the retention number(s) to 5, you will retain 5 copies of each database. Plan your backup storage accordingly.\n\nAn example of the backup container in action can be found here: [example log output](./docs/ExampleLog.md)\n\n## Backup Strategies\n\nThe `backup` app supports two backup strategies, each of which is explained below. Regardless of the strategy, backups are identified using a core name derived from the `host/database` specification and a timestamp. All backups are compressed using gzip.\n\n### Daily\n\nThe daily backup strategy is very simple. Backups are created in dated folders under the top level `/backups/` folder. 
When the maximum number of backups (`NUM_BACKUPS`) is exceeded, the oldest ones are pruned from disk.\n\nFor example (faked):\n\n```\n================================================================================================================================\nCurrent Backups:\n--------------------------------------------------------------------------------------------------------------------------------\n1.0K    2018-10-03 22:16        ./backups/2018-10-03/postgresql-TheOrgBook_Database_2018-10-03_22-16-11.sql.gz\n1.0K    2018-10-03 22:16        ./backups/2018-10-03/postgresql-TheOrgBook_Database_2018-10-03_22-16-28.sql.gz\n1.0K    2018-10-03 22:16        ./backups/2018-10-03/postgresql-TheOrgBook_Database_2018-10-03_22-16-46.sql.gz\n1.0K    2018-10-03 22:16        ./backups/2018-10-03/wallet-db-tob_holder_2018-10-03_22-16-13.sql.gz\n1.0K    2018-10-03 22:16        ./backups/2018-10-03/wallet-db-tob_holder_2018-10-03_22-16-31.sql.gz\n1.0K    2018-10-03 22:16        ./backups/2018-10-03/wallet-db-tob_holder_2018-10-03_22-16-48.sql.gz\n1.0K    2018-10-03 22:16        ./backups/2018-10-03/wallet-db-tob_verifier_2018-10-03_22-16-08.sql.gz\n1.0K    2018-10-03 22:16        ./backups/2018-10-03/wallet-db-tob_verifier_2018-10-03_22-16-25.sql.gz\n1.0K    2018-10-03 22:16        ./backups/2018-10-03/wallet-db-tob_verifier_2018-10-03_22-16-43.sql.gz\n13K     2018-10-03 22:16        ./backups/2018-10-03\n...\n61K     2018-10-04 10:43        ./backups/\n================================================================================================================================\n```\n\n### Rolling\n\nThe rolling backup strategy provides a bit more flexibility. 
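The strategy classifies each backup by the date on which it was taken, as described below; a sketch of that classification rule (GNU `date` required; this is illustrative, not the script's actual implementation, and the precedence of monthly over weekly is an assumption):

```bash
# Classify a backup date per the rolling strategy:
# last day of the month -> monthly, Sunday -> weekly, otherwise daily.
# (Monthly-over-weekly precedence is an assumption; GNU date required.)
classify() {
  local d="$1"                                   # date as YYYY-MM-DD
  if [ "$(date -d "$d + 1 day" +%d)" = "01" ]; then
    echo monthly
  elif [ "$(date -d "$d" +%u)" = "7" ]; then
    echo weekly
  else
    echo daily
  fi
}
classify 2018-10-31   # last day of October
classify 2018-10-07   # a Sunday
classify 2018-10-03   # a Wednesday
```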
It allows you to keep a number of recent `daily` backups, a number of `weekly` backups, and a number of `monthly` backups.\n\n- Daily backups are any backups done Monday through Saturday.\n- Weekly backups are any backups done at the end of the week, which we're calling Sunday.\n- Monthly backups are any backups done on the last day of a month.\n\nThere are retention settings you can set for each. The defaults provide you with a week's worth of `daily` backups, a month's worth of `weekly` backups, and a single backup for the previous month.\n\nAlthough the example does not show any `weekly` or `monthly` backups, you can see from the example that the folders are further broken down into the backup type.\n\nFor example (faked):\n\n```\n================================================================================================================================\nCurrent Backups:\n--------------------------------------------------------------------------------------------------------------------------------\n0       2018-10-03 22:16        ./backups/daily/2018-10-03\n1.0K    2018-10-04 09:29        ./backups/daily/2018-10-04/postgresql-TheOrgBook_Database_2018-10-04_09-29-52.sql.gz\n1.0K    2018-10-04 10:37        ./backups/daily/2018-10-04/postgresql-TheOrgBook_Database_2018-10-04_10-37-15.sql.gz\n1.0K    2018-10-04 09:29        ./backups/daily/2018-10-04/wallet-db-tob_holder_2018-10-04_09-29-55.sql.gz\n1.0K    2018-10-04 10:37        ./backups/daily/2018-10-04/wallet-db-tob_holder_2018-10-04_10-37-18.sql.gz\n1.0K    2018-10-04 09:29        ./backups/daily/2018-10-04/wallet-db-tob_verifier_2018-10-04_09-29-49.sql.gz\n1.0K    2018-10-04 10:37        ./backups/daily/2018-10-04/wallet-db-tob_verifier_2018-10-04_10-37-12.sql.gz\n22K     2018-10-04 10:43        ./backups/daily/2018-10-04\n22K     2018-10-04 10:43        ./backups/daily\n4.0K    2018-10-03 22:16        ./backups/monthly/2018-10-03\n4.0K    2018-10-03 22:16        ./backups/monthly\n4.0K    2018-10-03 
22:16        ./backups/weekly/2018-10-03\n4.0K    2018-10-03 22:16        ./backups/weekly\n61K     2018-10-04 10:43        ./backups/\n================================================================================================================================\n```\n\n## Using the Backup Script\n\nThe [backup script](./docker/backup.sh) has a few utility features built into it. For a full list of features and documentation run `backup.sh -h`.\n\nFeatures include:\n\n- The ability to list the existing backups, `backup.sh -l`\n- Listing the current configuration, `backup.sh -c`\n- Running a single backup cycle, `backup.sh -1`\n- Restoring a database from backup, `backup.sh -r \u003cdatabaseSpec/\u003e [-f \u003cbackupFileFilter\u003e]`\n  - Restore mode allows you to restore a database to a different location (host and/or database name), provided the script can contact the host and you can provide the appropriate credentials.\n- Verifying backups, `backup.sh [-s] -v \u003cdatabaseSpec/\u003e [-f \u003cbackupFileFilter\u003e]`\n  - Verify mode restores a backup to the local server to ensure it can be restored without error. Once restored, a table query is performed to ensure at least one table was restored and that queries against the database succeed without error. All database files and configuration are destroyed following the tests.\n\n## Using Backup Verification\n\nThe [backup script](./docker/backup.sh) supports running manual or scheduled verifications on your backups; `backup.sh [-s] -v \u003cdatabaseSpec/\u003e [-f \u003cbackupFileFilter\u003e]`. Refer to the script documentation `backup.sh -h`, and the configuration documentation, [backup.conf](config/backup.conf), for additional details on how to use this feature.\n\n## Using the FTP backup\n\n- The FTP backup feature is enabled by specifying the FTP server URL `FTP_URL`.\n- The FTP server must support FTPS.\n- A path can be added to the URL. 
For example, the URL can be `ftp://ftp.gov.bc.ca/schoolbus-db-backup/`. Note that when adding a path, the URL must end with `/`, as in the example.\n- The username and password must be populated in the secret key. Refer to the deployment configuration section.\n- There is a [known issue](http://redoubtsolutions.com/fix-the-supplied-message-is-incomplete-error-when-you-use-an-ftps-client-to-upload-a-file-in-windows/) for FTPS with Windows 2012 FTP.\n\n## Using the Webhook Integration\n\nThe Webhook integration feature is enabled by specifying the webhook URL, `WEBHOOK_URL`, in your configuration. It's recommended that you also provide values for `ENVIRONMENT_FRIENDLY_NAME` and `ENVIRONMENT_NAME`, so you can better identify the environment from which the messages originate and do things like produce links to the environment.\n\nThe Webhook integration feature was built with Rocket.Chat in mind, and an integration script for Rocket.Chat can be found in [rocket.chat.integration.js](./scripts/rocket.chat.integration.js). This script was developed to support the BC OpenShift Pathfinder environment and will format the notifications from the backup script into Rocket.Chat messages (examples below). If you provide values for the environment name (`ENVIRONMENT_FRIENDLY_NAME` and `ENVIRONMENT_NAME`), hyperlinks will be added to the messages to link you to the Pathfinder project console.\n\nSample Message:\n\n![Sample Message](./docs/SampleRocketChatMessage.png)\n\nSample Error Message:\n\n![Sample Error Message](./docs/SampleRocketChatErrorMessage.png)\n\nFor information on how to set up a webhook in Rocket.Chat, refer to [Incoming WebHook Scripting](https://rocket.chat/docs/administrator-guides/integrations/). 
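In a deployment template the webhook setting typically surfaces as a plain environment variable on the backup container; an illustrative fragment (the URL value here is hypothetical):

```yaml
# Illustrative; use the Webhook URL generated by your Rocket.Chat integration.
- name: WEBHOOK_URL
  value: "https://chat.example.com/hooks/<token>"
```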
The **Webhook URL** created during this process is the URL you use for `WEBHOOK_URL` to enable the Webhook integration feature.\n\n## Database Plugin Support\n\nThe backup container uses a plugin architecture to perform the database specific operations needed to support various database types.\n\nThe plugins are loaded dynamically based on the container type. By default, the `backup.null.plugin` will be loaded when the container type is not recognized.\n\nTo add support for a new database type:\n\n1. Update the `getContainerType` function in [backup.container.utils](./docker/backup.container.utils) to detect the new type of database.\n2. Using the existing plugins as reference, implement the database specific scripts for the new database type.\n3. Using the existing docker files as reference, create a new one to build the new container type.\n4. Update the build and deployment templates and their documentation as needed.\n5. Update the project documentation as needed.\n6. Test, test, test.\n7. Submit a PR.\n\nPlugin Examples:\n\n- [backup.postgres.plugin](./docker/backup.postgres.plugin)\n\n  - Postgres backup implementation.\n\n- [backup.mongo.plugin](./docker/backup.mongo.plugin)\n\n  - Mongo backup implementation.\n\n- [backup.mssql.plugin](./docker/backup.mssql.plugin)\n\n  - MSSQL backup implementation.\n\n- [backup.mariadb.plugin](./docker/backup.mariadb.plugin)\n\n  - MariaDB backup implementation. This plugin should also work with MySQL, but is currently untested.\n\n- [backup.null.plugin](./docker/backup.null.plugin)\n  - Sample/Template backup implementation that simply outputs log messages for the various operations.\n\n## Backup\n\n_The following sections describe some Postgres-specific implementation details; however, the steps are generally the same across database implementations._\n\nThe purpose of the backup app is to do automatic backups. Deploy the Backup app to do daily backups. 
Viewing the logs for the Backup app will show a record of backups that have been completed.\n\nThe Backup app performs the following sequence of operations:\n\n1. Create a directory that will be used to store the backup.\n2. Use the `pg_dump` and `gzip` commands to make a backup.\n3. Cull backups beyond `$NUM_BACKUPS` (default 31, configured in the deployment script).\n4. Wait/sleep for a period of time and repeat.\n\nNote that with the pod deployment, we support cron schedule(s) or the legacy mode (which uses a simple \"sleep\") to run the backup periodically. With the OpenShift Scheduled Job deployment, use the backup-cronjob.yaml template and set the schedule via the OpenShift cronjob object SCHEDULE template parameter.\n\nA separate pod is used, rather than running the backups from the Postgres pod, for fault tolerance - keeping the backups separate from the database storage. We don't want to, for example, lose the database storage, or have the database and backup storage fill up, and lose both the database and the backups.\n\n### Immediate Backup\n\n#### Execute a single backup cycle with the pod deployment\n\n- Check the logs of the Backup pod to make sure a backup isn't running right now (pretty unlikely...)\n- Open a terminal window to the pod\n- Run `backup.sh -1`\n  - This will run a single backup cycle and exit.\n\n#### Execute an on-demand backup using the scheduled job\n\n- Run the following: `oc create job ${SOMEJOBNAME} --from=cronjob/${BACKUP_CRONJOB_NAME}`\n  - example: `oc create job my-backup-1 --from=cronjob/backup-postgresql`\n  - this will run a single backup job and exit.\n  - note: the jobs created in this manner are NOT cleaned up by the scheduler like the automated jobs are.\n\n### Restore\n\nThe `backup.sh` script's restore mode makes it very simple to restore the most recent backup of a particular database. 
It's as simple as running the following command (run `backup.sh -h` for full details on additional options):\n\n    backup.sh -r postgresql/TheOrgBook_Database\n\nFollowing are more detailed steps to perform a restore of a backup.\n\n1. Log into the OpenShift Console and log into OpenShift on the command shell window.\n   1. The instructions here use a mix of the console and command line, but all could be done from a command shell using \"oc\" commands.\n1. Scale to 0 all Apps that use the database connection.\n   1. This is necessary as the Apps will need to restart to pull data from the restored backup.\n   1. It is recommended that you also scale down to 0 your client application so that users know the application is unavailable while the database restore is underway.\n      1. A nice addition to this would be a user-friendly \"This application is offline\" message - not yet implemented.\n1. Restart the database pod as a quick way of closing any other database connections from users using port forwarding or who have rsh'd in to connect directly to the database.\n1. Open an rsh into the backup pod:\n   1. Open a command prompt connection to OpenShift using `oc login` with parameters appropriate for your OpenShift host.\n   1. Change to the OpenShift project containing the Backup App, `oc project \u003cProject Name\u003e`\n   1. List pods using `oc get pods`\n   1. Open a remote shell connection to the **backup** pod, `oc rsh \u003cBackup Pod Name\u003e`\n1. In the rsh session, run the backup script in restore mode, `./backup.sh -r \u003cDatabaseSpec/\u003e`, to restore the desired backup file. For full information on how to use restore mode, refer to the script documentation, `./backup.sh -h`. Have the Admin password for the database handy; the script will ask for it during the restore process.\n   1. The restore script will automatically grant the database user access to the restored database. 
If there are other users needing access to the database, such as the DBA group, you will need to additionally run the following commands on the database pod itself using `psql`:\n      1. Get a list of the users by running the command `\\du`\n      1. For each user that is not \"postgres\" and $POSTGRESQL_USER, execute the command `GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"\u003cname of user\u003e\";`\n   1. If users have been set up with other grants, set them up as well.\n1. Verify that the database restore worked\n   1. On the database pod, query a table - e.g. the USER table: `SELECT * FROM \"SBI_USER\";` - you can look at other tables if you want.\n   1. Verify the expected data is shown.\n1. Exit remote shells back to your local command line\n1. From the OpenShift Console, restart the app:\n   1. Scale up any pods you scaled down and wait for them to finish starting up. View the logs to verify there were no startup issues.\n1. Verify full application functionality.\n\nDone!\n\n## Network Policies\n\nThe default `backup-container` template contains a basic Network Policy that is designed to be functional out-of-the-box for most standard deployments. It provides:\n- Internal traffic authorization towards target databases: for this to work, the target database deployments must be in the same namespace/environment AND must be labelled with `backup=true`.\n\nThe default Network Policy is meant to be a \"one size fits all\" starter policy to facilitate standing up the `backup-container` in a new environment. Please consider updating/tweaking it to better fit your needs, depending on your setup.\n\n# Example Deployments\n\n\u003cdetails\u003e\u003csummary\u003eExample of a Postgres deployment\u003c/summary\u003e\n\nThe following outlines the deployment of a simple backup of three PostgreSQL databases in the same project namespace, on OCP v4.x.\n\n1. 
As per OCP4 [docs](https://developer.gov.bc.ca/OCP4-Backup-and-Restore), 25G of the storage class `netapp-file-backup` is the default quota. If this is insufficient, you may [request](https://github.com/BCDevOps/devops-requests/issues/new/choose) more.\n\n2. `git clone https://github.com/BCDevOps/backup-container.git \u0026\u0026 cd backup-container`.\n\nCreate the image.\n\n```bash\noc -n 599f0a-tools process -f ./openshift/templates/backup/backup-build.yaml \\\n  -p NAME=nrmsurveys-bkup OUTPUT_IMAGE_TAG=v1 | oc -n 599f0a-tools create -f -\n```\n\n3. Configure [backup.conf](./config/backup.conf), listing your database(s) and setting your cron schedule(s).\n\n```bash\npostgres=eaofider-postgresql:5432/eaofider\npostgres=pawslimesurvey-postgresql:5432/pawslimesurvey\n\n0 1 * * * default ./backup.sh -s\n0 4 * * * default ./backup.sh -s -v all\n```\n\n4. Configure references to your DB credentials in [backup-deploy.yaml](./openshift/templates/backup/backup-deploy.yaml), replacing the boilerplate `DATABASE_USER` and `DATABASE_PASSWORD` environment variables.\n\n```yaml\n- name: EAOFIDER_POSTGRESQL_USER\n  valueFrom:\n    secretKeyRef:\n      name: eaofider-postgresql\n      key: \"${DATABASE_USER_KEY_NAME}\"\n- name: EAOFIDER_POSTGRESQL_PASSWORD\n  valueFrom:\n    secretKeyRef:\n      name: eaofider-postgresql\n      key: \"${DATABASE_PASSWORD_KEY_NAME}\"\n```\n\nNote that underscores should be used in the environment variable names.\n\n5. Create your customized `./openshift/backup-deploy.overrides.param` parameter file, if required.\n\n6. 
Deploy the app; here the example namespace is `599f0a-dev` and the app name is `nrmsurveys-bkup`:\n\n```bash\noc -n 599f0a-dev create configmap backup-conf --from-file=./config/backup.conf\noc -n 599f0a-dev label configmap backup-conf app=nrmsurveys-bkup\n\noc -n 599f0a-dev process -f ./openshift/templates/backup/backup-deploy.yaml \\\n  -p NAME=nrmsurveys-bkup \\\n  -p IMAGE_NAMESPACE=599f0a-tools \\\n  -p SOURCE_IMAGE_NAME=nrmsurveys-bkup \\\n  -p TAG_NAME=v1 \\\n  -p BACKUP_VOLUME_NAME=nrmsurveys-bkup-pvc -p BACKUP_VOLUME_SIZE=20Gi \\\n  -p VERIFICATION_VOLUME_SIZE=5Gi \\\n  -p ENVIRONMENT_FRIENDLY_NAME='NRM Survey DB Backups' | oc -n 599f0a-dev create -f -\n```\n\nTo clean up the deployment\n\n```bash\noc -n 599f0a-dev delete pvc/nrmsurveys-bkup-pvc pvc/backup-verification secret/nrmsurveys-bkup secret/ftp-secret dc/nrmsurveys-bkup networkpolicy/nrmsurveys-bkup configmap/backup-conf\n```\n\nTo clean up the image stream and build configuration\n\n```bash\noc -n 599f0a-dev delete buildconfig/nrmsurveys-bkup imagestream/nrmsurveys-bkup\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\u003csummary\u003eExample of a MongoDB deployment\u003c/summary\u003e\n\nThe following outlines the deployment of a simple backup of a single MongoDB database with backup validation.\n\n1. Decide on the amount of backup storage required. While 25Gi is the default quota limit in BC Gov OCP4 provisioned namespaces for `netapp-file-backup`-class storage, teams are able to request more. If you are backing up a non-production environment or an environment outside of BC Gov OCP, you can use a different storage class and thus a different default storage quota. This example assumes that you're using 5Gi of `netapp-file-backup`-class storage.\n2. `git clone https://github.com/BCDevOps/backup-container.git \u0026\u0026 cd backup-container`.\n3. Determine the OpenShift namespace for the image (e.g. `abc123-dev`), the app name (e.g. `myapp-backup`), and the image tag (e.g. `v1`). 
Then build the image in your `-tools` namespace.\n\n```bash\noc -n abc123-tools process -f ./openshift/templates/backup/backup-build.yaml \\\n  -p DOCKER_FILE_PATH=Dockerfile_Mongo \\\n  -p NAME=myapp-backup -p OUTPUT_IMAGE_TAG=v1 -p BASE_IMAGE_FOR_BUILD=registry.access.redhat.com/rhscl/mongodb-36-rhel7 | oc -n abc123-tools create -f -\n```\n\n4. Configure `./config/backup.conf`. This defines the database(s) to back up and the schedule that backups are to follow. Additionally, this sets up backup validation (identified by the `-v all` flag).\n\n```bash\n# Database(s)\nmongo=myapp-mongodb:27017/mydb\n\n# Cron Schedule(s)\n0 1 * * * default ./backup.sh -s\n0 4 * * * default ./backup.sh -s -v all\n```\n\n5. Configure references to your DB credentials in [backup-deploy.yaml](./openshift/templates/backup/backup-deploy.yaml), replacing the boilerplate `DATABASE_USER` and `DATABASE_PASSWORD` environment variable names. Note the hostname of the database to be backed up. This example uses a hostname of `myapp-mongodb`, which maps to environment variables named `MYAPP_MONGODB_USER` and `MYAPP_MONGODB_PASSWORD`. See the [backup.conf](#backupconf) section above for more in-depth instructions. This example also assumes that the name of the secret containing your database username and password is the same as the provided `DATABASE_DEPLOYMENT_NAME` parameter. If that's not the case for your service, the secret name can be overridden.\n\n```yaml\n- name: MYAPP_MONGODB_USER\n  valueFrom:\n    secretKeyRef:\n      name: \"${DATABASE_DEPLOYMENT_NAME}\"\n      key: \"${DATABASE_USER_KEY_NAME}\"\n- name: MYAPP_MONGODB_PASSWORD\n  valueFrom:\n    secretKeyRef:\n      name: \"${DATABASE_DEPLOYMENT_NAME}\"\n      key: \"${DATABASE_PASSWORD_KEY_NAME}\"\n```\n\n6. Deploy the app. In this example, the namespace is `abc123-dev` and the app name is `myapp-backup`. Note that the key names within the database secret referencing database username and password are `username` and `password`, respectively. 
If this is not the case for your deployment, specify the correct key names as parameters `DATABASE_USER_KEY_NAME` and `DATABASE_PASSWORD_KEY_NAME`. Also note that `BACKUP_VOLUME_NAME` is from Step 2 above.\n\n```bash\noc -n abc123-dev create configmap backup-conf --from-file=./config/backup.conf\noc -n abc123-dev label configmap backup-conf app=myapp-backup\n\noc -n abc123-dev process -f ./openshift/templates/backup/backup-deploy.yaml \\\n  -p NAME=myapp-backup \\\n  -p IMAGE_NAMESPACE=abc123-tools \\\n  -p SOURCE_IMAGE_NAME=myapp-backup \\\n  -p TAG_NAME=v1 \\\n  -p BACKUP_VOLUME_NAME=bk-abc123-dev-v9k7xgyvwdxm \\\n  -p BACKUP_VOLUME_SIZE=5Gi \\\n  -p VERIFICATION_VOLUME_SIZE=10Gi \\\n  -p VERIFICATION_VOLUME_CLASS=netapp-file-standard \\\n  -p DATABASE_DEPLOYMENT_NAME=myapp-mongodb \\\n  -p DATABASE_USER_KEY_NAME=username \\\n  -p DATABASE_PASSWORD_KEY_NAME=password \\\n  -p ENVIRONMENT_FRIENDLY_NAME='My App MongoDB Backups' | oc -n abc123-dev create -f -\n```\n\n\u003c/details\u003e\n\n## Deploy with Helm Chart\n\n```bash\nhelm repo add bcgov https://bcgov.github.io/helm-charts\nhelm upgrade --install db-backup-storage bcgov/backup-storage\n```\n\nFor customizing the configuration, go to: https://github.com/bcgov/helm-charts/tree/master/charts/backup-storage\n\n# Prebuilt Container Images\n\nStarting with v2.3.3, prebuilt container images are built and published with each release. 
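Consuming a prebuilt image amounts to referencing it from your deployment rather than building locally; an illustrative fragment (the tag shown is hypothetical — pick an actual release tag from the package listings):

```yaml
# Container image reference in a Deployment/DeploymentConfig spec.
# The tag shown is illustrative; use a published release tag.
image: ghcr.io/bcgov/backup-container:latest
```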
As of v2.10.1, the prebuilt images are published to the GitHub Container Registry (ghcr.io).\n\n- [ghcr.io/bcgov/backup-container](https://github.com/bcgov/backup-container/pkgs/container/backup-container) (`PostgreSQL`)\n- [ghcr.io/bcgov/backup-container-mongo](https://github.com/bcgov/backup-container/pkgs/container/backup-container-mongo)\n- [ghcr.io/bcgov/backup-container-mssql](https://github.com/bcgov/backup-container/pkgs/container/backup-container-mssql)\n- [ghcr.io/bcgov/backup-container-mariadb](https://github.com/bcgov/backup-container/pkgs/container/backup-container-mariadb)\n\n# Postgres Base Version\n\nThe backup container is built on top of the base Postgres image defined [here](./docker/Dockerfile).\n\nTo use previously supported versions of Postgres (v9 to v12), use the images from the [Prebuilt Container Images](#prebuilt-container-images) section.\n\n# Tips and Tricks\n\nPlease refer to the [Tips and Tricks](./docs/TipsAndTricks.md) document for solutions to known issues.\n\n# Getting Help or Reporting an Issue\n\nTo report bugs/issues/feature requests, please file an [issue](../../issues).\n\n# How to Contribute\n\nIf you would like to contribute, please see our [CONTRIBUTING](./CONTRIBUTING.md) guidelines.\n\nPlease note that this project is released with a [Contributor Code of Conduct](./CODE_OF_CONDUCT.md).\nBy participating in this project you agree to abide by its terms.\n