{"id":30688908,"url":"https://github.com/Digicreon/Arkiv","last_synced_at":"2025-09-02T01:04:13.051Z","repository":{"id":145613658,"uuid":"99338430","full_name":"Digicreon/Arkiv","owner":"Digicreon","description":"Backup and archive tool.","archived":false,"fork":false,"pushed_at":"2024-09-16T09:37:34.000Z","size":118,"stargazers_count":46,"open_issues_count":1,"forks_count":4,"subscribers_count":4,"default_branch":"main","last_synced_at":"2025-08-30T02:40:42.640Z","etag":null,"topics":["amazon-glacier","amazon-s3","amazon-s3-storage","arkiv","backup-data","backup-database","backup-files","backup-script","backup-solution","backup-utility","mysql-backup","server-backup","system-administration"],"latest_commit_sha":null,"homepage":"","language":"Shell","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Digicreon.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"COPYING","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2017-08-04T11:55:33.000Z","updated_at":"2025-07-23T08:22:55.000Z","dependencies_parsed_at":null,"dependency_job_id":"a9805fa2-39f1-40ee-9553-a53c93434d79","html_url":"https://github.com/Digicreon/Arkiv","commit_stats":null,"previous_names":["digicreon/arkiv","amaury/arkiv"],"tags_count":7,"template":false,"template_full_name":null,"purl":"pkg:github/Digicreon/Arkiv","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Digicreon%2FArkiv","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Digicreon%2FArkiv/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Digicreon%2FArkiv/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHu
b/repositories/Digicreon%2FArkiv/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Digicreon","download_url":"https://codeload.github.com/Digicreon/Arkiv/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Digicreon%2FArkiv/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":273213992,"owners_count":25065061,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-09-01T02:00:09.058Z","response_time":120,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["amazon-glacier","amazon-s3","amazon-s3-storage","arkiv","backup-data","backup-database","backup-files","backup-script","backup-solution","backup-utility","mysql-backup","server-backup","system-administration"],"created_at":"2025-09-02T01:04:04.397Z","updated_at":"2025-09-02T01:04:13.029Z","avatar_url":"https://github.com/Digicreon.png","language":"Shell","readme":"Arkiv\n=====\n\nEasy-to-use backup and archive tool.\n\nArkiv is designed to **backup** local files and [MySQL](https://www.mysql.com/) databases, and **archive** them on [Amazon S3](https://aws.amazon.com/s3/) and [Amazon Glacier](https://aws.amazon.com/glacier/).  
\nBackup files are removed (locally and from Amazon S3) after defined delays.\n\nArkiv can back up your data on a **daily** or an **hourly** basis (you can choose on which days and/or at which hours it will be launched).  \nIt is written in pure shell, so it can be used on any Unix/Linux machine.\n\nArkiv was created by [Amaury Bouchard](http://amaury.net) and is [open-source software](#what-is-arkivs-license).\n\n\n************************************************************************\n\nTable of contents\n-----------------\n\n1. [How it works](#1-how-it-works)\n   1. [General Idea](#11-general-idea)\n   2. [Step-by-step](#12-step-by-step)\n2. [Installation](#2-installation)\n   1. [Prerequisites](#21-prerequisites)\n   2. [Source installation](#22-source-installation)\n   3. [Configuration](#23-configuration)\n3. [Frequently Asked Questions](#3-frequently-asked-questions)\n   1. [Cost and license](#31-cost-and-license)\n   2. [Configuration](#32-configuration)\n   3. [Files backup](#33-files-backup)\n   4. [Output and log](#34-output-and-log)\n   5. [Database backup](#35-database-backup)\n   6. [Crontab](#36-crontab)\n   7. [Miscellaneous](#37-miscellaneous)\n\n\n************************************************************************\n\n## 1. How it works\n\n### 1.1 General idea\n\n- Generate backup data from local files and databases.\n- Store data on the local drive for a few days/weeks, in order to be able to restore fresh data very quickly.\n- Store data on Amazon S3 for a few weeks/months, if you need to restore them easily.\n- Store data on Amazon Glacier forever. It's incredibly cheap storage that should be used instead of Amazon S3 for long-term preservation.\n\nData are deleted from the local drive and Amazon S3 when the configured delays are reached.   \nIf your data are backed up multiple times per day (not just every day), it's possible to define a fine-grained purge of the files stored on the local drive and on Amazon S3.   
\nFor example, it's possible to:\n- remove half the backups after two days\n- keep only 2 backups per day after 2 weeks\n- keep 1 backup per day after 3 weeks\n- remove all files after 2 months\n\nThe same kind of configuration can be defined for Amazon S3 archives.\n\n### 1.2 Step-by-step\n\n**Starting**\n1. Arkiv is launched every day (or every hour) by Crontab.\n2. It creates a directory dedicated to the backups of the day (or the backups of the hour).\n\n**Backup**\n1. Each configured path is `tar`'ed and compressed, and the result is stored in the dedicated directory.\n2. *If MySQL backups are configured*, the needed databases are dumped and compressed, in a sub-directory.\n3. *If encryption is configured*, the backup files are encrypted.\n4. Checksums are computed for all the generated files. These checksums are useful to verify that the files are not corrupted after being transferred over a network.\n\n**Archiving**\n1. *If Amazon Glacier is configured*, all the generated backup files (not the checksums file) are sent to Amazon Glacier. For each one of them, a JSON file is created with the response's content; these files are important, because they contain the *archiveId* needed to restore the file.\n2. *If Amazon S3 is configured*, the whole directory (backup files + checksums file + Amazon Glacier JSON files) is copied to Amazon S3.\n\n**Purge**\n1. After a configured delay, backup files are removed from the local disk drive.\n2. *If Amazon S3 is configured*, all backup files are removed from Amazon S3 after a configured delay. The checksums file and the Amazon Glacier JSON files are *not* removed, because they are needed to restore data from Amazon Glacier and check their integrity.\n\n\n************************************************************************\n\n## 2. Installation\n\n### 2.1 Prerequisites\n\n#### 2.1.1 Basic\nSeveral tools are needed by Arkiv to work correctly. 
They are usually installed by default on most Unix/Linux distributions.\n- A not-so-old [`bash`](https://en.wikipedia.org/wiki/Bash_(Unix_shell)) Shell interpreter located at `/bin/bash` (mandatory)\n- [`tar`](https://en.wikipedia.org/wiki/Tar_(computing)) for file concatenation (mandatory)\n- [`gzip`](https://en.wikipedia.org/wiki/Gzip), [`bzip2`](https://en.wikipedia.org/wiki/Bzip2), [`xz`](https://en.wikipedia.org/wiki/Xz) or [`zstd`](https://en.wikipedia.org/wiki/Zstd) for compression (at least one)\n- [`openssl`](https://en.wikipedia.org/wiki/OpenSSL) for encryption (optional)\n- [`sha256sum`](https://en.wikipedia.org/wiki/Sha256sum) for checksum computation (mandatory)\n- [`tput`](https://en.wikipedia.org/wiki/Tput) for [ANSI text formatting](https://en.wikipedia.org/wiki/ANSI_escape_code) (optional: can be manually deactivated; automatically deactivated if not installed)\n\nTo install these tools on Ubuntu:\n```shell\n# apt-get install tar gzip bzip2 xz-utils openssl coreutils ncurses-bin\n```\n\n#### 2.1.2 Encryption\nIf you want to encrypt the generated backup files (stored locally as well as the ones archived on Amazon S3 and Amazon Glacier), you need to create a symmetric encryption key.\n\nUse this command to do it (you can adapt the destination path):\n```shell\n# openssl rand -out ~/.ssh/symkey.bin 32\n```\n\n#### 2.1.3 MySQL\nIf you want to back up MySQL databases, you have to install [`mysqldump`](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) or [`xtrabackup`](https://www.percona.com/software/mysql-database/percona-xtrabackup).\n\nTo install `mysqldump` on Ubuntu:\n```shell\n# apt-get install mysql-client\n```\n\nTo install `xtrabackup` on Ubuntu (see [documentation](https://www.percona.com/doc/percona-xtrabackup/2.4/installation/apt_repo.html)):\n```shell\n# wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb\n# dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb\n# apt-get update\n# apt-get install 
percona-xtrabackup-24\n```\n\n#### 2.1.4 Amazon Web Services\nIf you want to archive the generated backup files on Amazon S3/Glacier, you have to do these things:\n- Create a dedicated bucket on [Amazon S3](https://aws.amazon.com/s3/).\n- If you want to archive on [Amazon Glacier](https://aws.amazon.com/glacier/), create a dedicated vault in the same datacenter.\n- Create an [IAM](https://aws.amazon.com/iam/) user with read-write access to this bucket and this vault (if needed).\n- Install the [AWS-CLI](https://aws.amazon.com/cli/) program and [configure it](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html).\n\nInstall AWS-CLI on Ubuntu:\n```shell\n# apt-get install awscli\n```\n\nConfigure the program (you will be asked for the AWS user's access key and secret key, and the datacenter to use):\n```shell\n# aws configure\n```\n\n\n### 2.2 Source Installation\n\nGet the latest version:\n```shell\n# wget https://github.com/Digicreon/Arkiv/archive/refs/tags/1.0.0.zip -O Arkiv-1.0.0.zip\n# unzip Arkiv-1.0.0.zip\n\nor\n\n# wget https://github.com/Digicreon/Arkiv/archive/refs/tags/1.0.0.tar.gz -O Arkiv-1.0.0.tar.gz\n# tar xzf Arkiv-1.0.0.tar.gz\n```\n\n\n### 2.3 Configuration\n\n```shell\n# cd Arkiv-1.0.0\n# ./arkiv config\n```\n\nSome questions will be asked about:\n- If you want a simple installation (one backup per day, every day, at midnight).\n- The local machine's name (will be used as a subdirectory of the S3 bucket).\n- The compression type to use.\n- If you want to encrypt the generated backup files.\n- Which files must be backed up.\n- Everything about MySQL backup (SQL or binary backup, which databases, host/login/password for the connection).\n- Where to store the compressed files resulting from the backup.\n- Where to archive data on Amazon S3 and Amazon Glacier (if you want to).\n- When to purge files (locally and on Amazon S3).\n\nFinally, the program will offer to add the Arkiv execution to the user's 
crontab.\n\n\n************************************************************************\n\n## 3. Frequently Asked Questions\n\n### 3.1 Cost and license\n\n#### What is Arkiv's license?\nArkiv is licensed under the terms of the [MIT License](https://en.wikipedia.org/wiki/MIT_License), which is a permissive open-source free software license.\n\nSee the file `COPYING` for details.\n\n#### How much will I pay on Amazon S3/Glacier?\nYou can use the [Amazon Web Services Calculator](https://calculator.s3.amazonaws.com/index.html) to estimate the cost depending on your usage.\n\n\n### 3.2 Configuration\n\n#### How to choose the compression type?\nYou can use one of the four common compression tools (`gzip`, `bzip2`, `xz`, `zstd`).\n\nUsually, you can follow these guidelines:\n- Use `zstd` if you want the best compression and decompression speed.\n- Use `xz` if you want the best compression ratio.\n- Use `gzip` or `bzip2` if you want the best portability (`xz` and `zstd` are younger and less widespread).\n\nHere are some helpful links:\n- [Gzip vs Bzip2 vs XZ Performance Comparison](https://www.rootusers.com/gzip-vs-bzip2-vs-xz-performance-comparison/)\n- [Quick Benchmark: Gzip vs Bzip2 vs LZMA vs XZ vs LZ4 vs LZO](https://catchchallenger.first-world.info/wiki/Quick_Benchmark:_Gzip_vs_Bzip2_vs_LZMA_vs_XZ_vs_LZ4_vs_LZO)\n- [Zstandard presentation and benchmarks](https://facebook.github.io/zstd/)\n\nThe default is `zstd`, because it has the best compression/speed ratio.\n\n#### I chose the simple configuration mode (one backup per day, every day). Why is there a directory called \"00:00\" in the backup directory of the day?\nThis directory means that your Arkiv backup process is launched at midnight.\n\nYou may think that the backed up data should have been stored directly in the directory of the day, without a sub-directory for the hour (because there is only one backup per day). 
But if you someday changed the configuration to do several backups per day, Arkiv would have trouble managing purges.\n\n#### How to execute Arkiv with different configurations?\nYou can add the path to the configuration file as a parameter of the program on the command line.\n\nTo generate the configuration file:\n```shell\n# ./arkiv config --config=/path/to/config/file\nor\n# ./arkiv config -c /path/to/config/file\n```\n\nTo launch Arkiv:\n```shell\n# ./arkiv exec --config=/path/to/config/file\nor\n# ./arkiv exec -c /path/to/config/file\n```\n\nYou can modify the Crontab to add the path too.\n\n#### Is it possible to use a public/private key infrastructure for the encryption functionality?\nIt is not possible to encrypt data with a public key; OpenSSL's [PKI](https://en.wikipedia.org/wiki/Public_key_infrastructure) isn't designed to encrypt large data. Encryption is done using a 256-bit AES algorithm, which is symmetric.  \nTo ensure that only the owner of a private key is able to decrypt the data, without transferring this key, you have to encrypt the symmetric key using the public key, and then send the encrypted key to the private key's owner.\n\nHere are the steps to do it (key files are usually located in `~/.ssh/`).\n\nCreate the symmetric key:\n```shell\n# openssl rand -out symkey.bin 32\n```\n\nConvert the public and private keys to PEM format (usually people have keys in RSA format, using them with [SSH](https://en.wikipedia.org/wiki/Secure_Shell)):\n```shell\n# openssl rsa -in id_rsa -outform pem -out id_rsa.pem\n# openssl rsa -in id_rsa -pubout -outform pem -out id_rsa.pub.pem\n```\n\nEncrypt the symmetric key with the public key:\n```shell\n# openssl rsautl -encrypt -inkey id_rsa.pub.pem -pubin -in symkey.bin -out symkey.bin.encrypt\n```\n\nTo decrypt the encrypted symmetric key using the private key:\n```shell\n# openssl rsautl -decrypt -inkey id_rsa.pem -in symkey.bin.encrypt -out symkey.bin\n```\n\nTo decrypt the data 
file:\n```shell\n# openssl enc -d -aes-256-cbc -in data.tgz.encrypt -out data.tgz -pass file:symkey.bin\n```\n\n#### Why is it not possible to archive on Amazon Glacier without archiving on Amazon S3?\nWhen you send a file to Amazon Glacier, you get back an *archiveId* (the file's unique identifier). Arkiv takes this information and writes it to a file; then this file is copied to Amazon S3.\nIf the *archiveId* is lost, you will not be able to get the file back from Amazon Glacier. An archived file that you can't restore is useless. Even though it's possible to get the list of archived files from Amazon Glacier, it's a slow process; it's more flexible to store *archive identifiers* in Amazon S3 (and the cost to store them is insignificant).\n\n\n### 3.3 Files backup\n\n#### How to exclude files and directories from archives?\nArkiv provides several ways to exclude content from archives.\n\nFirst of all, it follows the [CACHEDIR.TAG](https://bford.info/cachedir/) standard. If a directory contains a `CACHEDIR.TAG` file, it will be added to the archive, as well as the `CACHEDIR.TAG` file, but not its other files and subdirectories.\n\nIf you want to exclude the content of a directory in a way similar to the previous one, but you don't want to create a `CACHEDIR.TAG` file (to avoid exclusion of the directory by other programs), you can create an empty `.arkiv-exclude` file in the directory. The directory and the `.arkiv-exclude` file will be added to the archive (to keep track of the folder and record that its content was excluded), but not the other files and subdirectories contained in the given directory.\n\nIf you want to exclude specific files in a directory, you can create a `.arkiv-ignore` file in the directory, and write a list of exclusion patterns into it. 
These patterns will be used to exclude files and subdirectories directly stored in the given directory.\n\nIf you create a `.arkiv-ignore-recursive` file in a directory, patterns will be read from this file to define recursive exclusions in the given directory and all its subdirectories.\n\n\n### 3.4 Output and log\n\n#### Is it possible to execute Arkiv without any output on STDOUT and/or STDERR?\nYes, you just have to add some options on the command line:\n- `--no-stdout` (or `-o`) to avoid output on STDOUT\n- `--no-stderr` (or `-e`) to avoid output on STDERR\n\nYou can use these options separately or together.\n\n#### How to write the execution log into a file?\nYou can use a dedicated parameter:\n```shell\n# ./arkiv exec --log=/path/to/log/file\nor\n# ./arkiv exec -l /path/to/log/file\n```\n\nIt will not disable output on the terminal. You can use the options `--no-stdout` and `--no-stderr` for that (see previous answer).\n\n#### How to write log to syslog?\nAdd the option `--syslog` (or `-s`) on the command line or in the Crontab command.\n\n#### How to get pure text (without ANSI commands) in Arkiv's log file?\nAdd the option `--no-ansi` (or `-n`) on the command line or in the Crontab command. It will act on terminal output as well as log file (see `--log` option above) and syslog (see `--syslog` option above).\n\n#### I open the Arkiv log file with less, and it's full of strange characters\nUnlike `more` and `tail`, `less` doesn't interpret ANSI text formatting commands (bold, color, etc.) by default.  
\nTo enable it, you have to use the option `-r` or `-R`.\n\n\n### 3.5 Database backup\n\n#### What kinds of database backups are available?\nArkiv can generate two kinds of database backups:\n- SQL backups created using [`mysqldump`](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html).\n- Binary backups using [`xtrabackup`](https://www.percona.com/software/mysql-database/percona-xtrabackup).\n\nThere are two types of binary backups:\n- Full backups; the server's files are entirely copied.\n- Incremental backups; only the data modified since the last backup (full or incremental) are copied.\n\nYou must do a full backup before performing any incremental backup.\n\n#### Which databases and table engines can be backed up?\nIf you choose SQL backups (using `mysqldump`), Arkiv can manage any table engine supported by [MySQL](https://www.mysql.com/), [MariaDB](https://mariadb.org/) and [Percona Server](https://www.percona.com/software/mysql-database/percona-server).\n\nIf you choose binary backups (using `xtrabackup`), Arkiv can handle:\n- MySQL (5.1 and above) or MariaDB, with InnoDB, MyISAM and XtraDB tables.\n- Percona Server with XtraDB tables.\n\nNote that MyISAM tables can't be incrementally backed up. They are copied entirely each time an incremental backup is performed.\n\n#### Are binary backups prepared for restore?\nNo. Binary backups are done using `xtrabackup --backup`. The `xtrabackup --prepare` step is skipped to save time and space. You will have to do it when you want to restore a database (see below).\n\n#### How to define a full binary backup once per day and an incremental backup every other hour?\nYou will have to create two different configuration files and add Arkiv to the Crontab twice: once for the full backup (every day at midnight, for example), and once for the incremental backups (every hour except midnight).\n\nYou need both executions to use the same LSN file. 
It will be written by the full backup, and read and updated by each incremental backup.\n\nThe same process can be used with any other frequency (for example: full backups once a week and incremental backups on the other days).\n\n#### How to restore a SQL backup?\nArkiv generates one SQL file per database. You have to extract the wanted file and feed it to your database server:\n```shell\n# unxz /path/to/database_sql/database.sql.xz\n# mysql -u username -p \u003c /path/to/database_sql/database.sql\n```\n\n#### How to restore a full binary backup without subsequent incremental backups?\nTo restore the database, you first need to extract the data:\n```shell\n# tar xJf /path/to/database_data.tar.xz\nor\n# tar xjf /path/to/database_data.tar.bz2\nor\n# tar xzf /path/to/database_data.tar.gz\n```\n\nThen you must prepare the backup:\n```shell\n# xtrabackup --prepare --target-dir=/path/to/database_data\n```\n\nPlease note that the MySQL server must be shut down, and the 'datadir' directory (usually `/var/lib/mysql`) must be empty. 
On Ubuntu:\n```shell\n# service mysql stop\n# rm -rf /var/lib/mysql/*\n```\n\nThen you can restore the data:\n```shell\n# xtrabackup --copy-back --target-dir=/path/to/database_data\n```\n\nFiles' ownership must be given back to the MySQL user (usually `mysql`):\n```shell\n# chown -R mysql:mysql /var/lib/mysql\n```\n\nFinally you can restart the MySQL daemon:\n```shell\n# service mysql start\n```\n\n#### How to restore a full + incrementals binary backup?\nLet's say you have a full backup (located in `/full/database_data`) and three incremental backups (located in `/inc1/database_data`, `/inc2/database_data` and `/inc3/database_data`), and you have already extracted the backed up files (see previous answer).\n\nFirst, you must prepare the full backup with the additional `--apply-log-only` option:\n```shell\n# xtrabackup --prepare --apply-log-only --target-dir=/full/database_data\n```\n\nAnd then you prepare using all incremental backups in their creation order, **except the last one**:\n```shell\n# xtrabackup --prepare --apply-log-only --target-dir=/full/database_data --incremental-dir=/inc1/database_data\n# xtrabackup --prepare --apply-log-only --target-dir=/full/database_data --incremental-dir=/inc2/database_data\n```\n\nData preparation of the last incremental backup is done without the `--apply-log-only` option:\n```shell\n# xtrabackup --prepare --target-dir=/full/database_data --incremental-dir=/inc3/database_data\n```\n\nOnce all backups have been merged, the process is the same as for a full backup:\n```shell\n# service mysql stop\n# rm -rf /var/lib/mysql/*\n# xtrabackup --copy-back --target-dir=/full/database_data\n# chown -R mysql:mysql /var/lib/mysql\n# service mysql start\n```\n\n\n### 3.6 Crontab\n\n#### On simple mode (one backup per day, every day at midnight), how to set up Arkiv to be executed at a time other than midnight?\nYou just have to edit the configuration file of the user's [Cron 
table](https://en.wikipedia.org/wiki/Cron):\n```shell\n# crontab -e\n```\n\n#### How to execute pre- and/or post-backup scripts?\nSee the previous answer. You just have to add these scripts before and/or after the Arkiv program in the Cron table.\n\n#### Is it possible to back up more often than every hour?\nNo, it's not possible.\n\n#### I want to have colors in the Arkiv log file when it's launched from Crontab, as well as when it's launched from the command line\nThe problem comes from the Crontab environment, which is very minimal.  \nYou have to set the `TERM` environment variable from the Crontab. It is also a good idea to define the `MAILTO` and `PATH` variables.\n\nEdit the Crontab:\n```shell\n# crontab -e\n```\n\nAnd add these three lines at its beginning:\n```shell\nTERM=xterm\nMAILTO=your.email@domain.com\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\n```\n\n#### How to receive an email alert when a problem occurs?\nAdd a `MAILTO` environment variable at the beginning of your Crontab. See the previous answer.\n\n\n### 3.7 Miscellaneous\n\n#### How to report bugs?\n[Arkiv issues tracker](https://github.com/Digicreon/Arkiv/issues)\n\n#### Why is Arkiv compatible only with the Bash interpreter?\nBecause the `read` builtin command has a `-s` parameter for silent input (used for encryption passphrase and MySQL password input without showing them), which is unavailable on `dash` or `zsh` (for example).\n\n#### Arkiv looks like Backup-Manager\nYes indeed. 
Both of them want to help people back up files and databases, and archive data in a secure place.\n\nBut Arkiv is different in several ways:\n- It can manage hourly backups.\n- It can transfer data to Amazon Glacier for long-term archiving.\n- It can manage complex purge policies.\n- The configuration process is simpler (you answer questions).\n- Written in pure shell, it doesn't need a Perl interpreter.\n\nOn the other hand, [Backup-Manager](https://github.com/sukria/Backup-Manager) is able to transfer to remote destinations by SCP or FTP, and to burn data to CD/DVD.\n\n","funding_links":[],"categories":["Backup Software"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FDigicreon%2FArkiv","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FDigicreon%2FArkiv","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FDigicreon%2FArkiv/lists"}