{"id":32136951,"url":"https://github.com/bash-unit/bash_unit","last_synced_at":"2026-04-01T20:34:27.692Z","repository":{"id":3454966,"uuid":"2040778","full_name":"bash-unit/bash_unit","owner":"bash-unit","description":"bash unit testing enterprise edition framework for professionals","archived":false,"fork":false,"pushed_at":"2026-02-11T12:41:58.000Z","size":1225,"stargazers_count":632,"open_issues_count":13,"forks_count":57,"subscribers_count":10,"default_branch":"main","last_synced_at":"2026-02-11T20:50:14.635Z","etag":null,"topics":["assertions","bash","tdd","test-driven-development","test-framework","testing","unit-testing","unittest","xunit"],"latest_commit_sha":null,"homepage":"","language":"Shell","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/bash-unit.png","metadata":{"files":{"readme":"README.adoc","changelog":null,"contributing":"CONTRIBUTING.md","funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null},"funding":{"github":["pgrange"],"liberapay":"bash_unit"}},"created_at":"2011-07-13T08:09:53.000Z","updated_at":"2026-02-11T12:42:02.000Z","dependencies_parsed_at":"2024-06-10T09:18:43.573Z","dependency_job_id":"53120597-38d6-496f-9764-9e4cb91e0d2a","html_url":"https://github.com/bash-unit/bash_unit","commit_stats":{"total_commits":226,"total_committers":26,"mean_commits":8.692307692307692,"dds":0.3584070796460177,"last_synced_commit":"070746c1b54793ae2ad004ad26f0e213b1c1bdfc"},"previous_names":["bash-unit/bash_unit"],"tags_count":27,"template":false,"template_full_name":null,"purl":"pkg:github/bash-unit/bash_
unit","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bash-unit%2Fbash_unit","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bash-unit%2Fbash_unit/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bash-unit%2Fbash_unit/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bash-unit%2Fbash_unit/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/bash-unit","download_url":"https://codeload.github.com/bash-unit/bash_unit/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bash-unit%2Fbash_unit/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31291702,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-01T13:12:26.723Z","status":"ssl_error","status_checked_at":"2026-04-01T13:12:25.102Z","response_time":53,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["assertions","bash","tdd","test-driven-development","test-framework","testing","unit-testing","unittest","xunit"],"created_at":"2025-10-21T04:53:15.483Z","updated_at":"2026-04-01T20:34:27.666Z","avatar_url":"https://github.com/bash-unit.png","language":"Shell","readme":"ifdef::backend-manpage[]\n= BASH_UNIT(1)\n\n== NAME\nendif::[]\n\nifndef::backend-manpage[]\nimage::img/bu_50.png[bash_unit]\nendif::[]\n\nbash_unit - 
bash unit testing enterprise edition framework for professionals!\n\n== Synopsis\n\n*bash_unit* [-f tap] [-p \u003cpattern\u003e] [-s \u003cpattern\u003e] [-r] [test_file]\n\n== Description\n\n*bash_unit* allows you to write unit tests (functions starting with *test*)\nand run them. In case of failure, it displays the stack trace\nwith source file and line number indications to locate the problem.\n\nNeed a quick start? The\nhttps://github.com/bash-unit/getting_started/[getting started project]\nwill help you get on track in no time.\n\nThe following functions are available in your tests (see below for detailed documentation):\n\n* `fail [message]`\n* `assert \u003cassertion\u003e [message]`\n* `assert_fail \u003cassertion\u003e [message]`\n* `assert_status_code \u003cexpected_status_code\u003e \u003cassertion\u003e [message]`\n* `assert_equals \u003cexpected\u003e \u003cactual\u003e [message]`\n* `assert_not_equals \u003cunexpected\u003e \u003cactual\u003e [message]`\n* `assert_matches \u003cexpected-regex\u003e \u003cactual\u003e [message]`\n* `assert_not_matches \u003cunexpected-regex\u003e \u003cactual\u003e [message]`\n* `assert_within_delta \u003cexpected num\u003e \u003cactual num\u003e \u003cmax delta\u003e [message]`\n* `assert_no_diff \u003cexpected\u003e \u003cactual\u003e [message]`\n* `skip_if \u003ccondition\u003e \u003cpattern\u003e`\n* `fake \u003ccommand\u003e [replacement code]`\n\nifndef::backend-manpage[]\nimage::img/demo.gif[demo]\nendif::[]\n\n_(by the way, the documentation you are reading is itself tested with bash-unit)_\n\n*bash_unit* is free software you may contribute to. 
See link:CONTRIBUTING.md[CONTRIBUTING.md].\n\n\n== Options\n\n*-p* _pattern_::\n  filters tests to run based on the given pattern.\n  You can specify several patterns by repeating this option\n  for each pattern.\n\n*-s* _pattern_::\n  skips tests whose name matches the given pattern.\n  You can specify several patterns by repeating this option\n  for each pattern.\n  Tests will appear in *bash_unit* output as _skipped_.\n  (see also _skip_if_)\n\n*-r*::\n  executes test cases in random order.\n  Only affects the order within a test file (files are always\n  executed in the order in which they are specified on the\n  command line).\n\n*-f* _output_format_::\n  specifies an alternative output format.\n  The only supported value is *tap*.\n\n*-q*::\n  quiet mode.\n  Will only output the status of each test with no further\n  information even in case of failure.\n\nifndef::backend-manpage[]\n\n== How to install *bash_unit*\n\n=== installing on Archlinux\n\nThe *bash_unit* package is available on Archlinux through AUR. To install it, issue the following command:\n\n    yaourt -Sys bash_unit\n\n=== installing via link:https://nixos.org/[Nix/NixOS]\n\nThe *bash_unit* package has been added to link:https://github.com/nixos/nixpkgs[nixpkgs]. 
You can use it with the following command:\n\n    nix-shell -p bash_unit\n\n=== installing via link:https://brew.sh[Homebrew]\n\n*bash_unit* is available by invoking brew:\n\n    brew install bash_unit\n\n=== other installation\n\nThis will install *bash_unit* in your current working directory:\n\n    curl -s https://raw.githubusercontent.com/bash-unit/bash_unit/master/install.sh | bash\n\nYou can also download it from the https://github.com/bash-unit/bash_unit/releases[release page].\n\nendif::[]\n\n=== GitHub Actions\n\nHere is an example of how you could integrate *bash_unit* with https://docs.github.com/fr/actions[GitHub Actions]:\n\n```\nname: bash_unit tests\non:\n  push:\n    branches: [ main ]\n  pull_request:\n    branches: [ main ]\n\njobs:\n  ubuntu:\n    runs-on: ubuntu-latest\n    steps:\n    - uses: actions/checkout@v4\n    - name: Unit testing with bash_unit\n      run: |\n        curl -s https://raw.githubusercontent.com/bash-unit/bash_unit/master/install.sh | bash\n        FORCE_COLOR=true ./bash_unit tests/test_*\n```\n\nSee this bash_unit https://github.com/pgrange/bash_unit_getting_started[getting started github project] for a working example.\n\n=== GitLab CI\n\nHere is an example of how you could integrate *bash_unit* with https://docs.gitlab.com/ee/ci/[GitLab CI]:\n\n```\ntest:\n  image: debian\n  script:\n    - apt-get update\n    - apt-get install --no-install-recommends -y curl ca-certificates\n    - curl -s https://raw.githubusercontent.com/bash-unit/bash_unit/master/install.sh | bash\n    - FORCE_COLOR=true ./bash_unit tests/test_*\n```\n\nSee this bash_unit https://gitlab.com/pgrange/bash_unit_getting_started[getting started gitlab project] for a working example.\n\n=== pre-commit hook\n\nYou can run `+bash_unit+` as a https://pre-commit.com[pre-commit] hook.\n\nAdd the following to your pre-commit configuration. 
By default it will run scripts identified as shell scripts whose path matches `+^tests/(.*/)?test_.*\\.sh$+`.\n\n[.pre-commit-config,yaml]\n```\nrepos:\n  - repo: https://github.com/bash-unit/bash_unit\n    rev: v2.2.0\n    hooks:\n      - id: bash-unit\n        always_run: true\n```\n\n== How to run tests\n\nTo run tests, simply call *bash_unit* with all your test files as parameters. For instance, to run some *bash_unit* tests from the *bash_unit* directory:\n\n```test\n./bash_unit tests/test_core.sh\n```\n\n```output\nRunning tests in tests/test_core.sh\n\tRunning test_assert_equals_fails_when_not_equal ... SUCCESS\n\tRunning test_assert_equals_succeed_when_equal ... SUCCESS\n\tRunning test_assert_fails ... SUCCESS\n\tRunning test_assert_fails_fails ... SUCCESS\n\tRunning test_assert_fails_succeeds ... SUCCESS\n\tRunning test_assert_matches_fails_when_not_matching ... SUCCESS\n\tRunning test_assert_matches_succeed_when_matching ... SUCCESS\n\tRunning test_assert_no_diff_fails_when_diff ... SUCCESS\n\tRunning test_assert_no_diff_succeeds_when_no_diff ... SUCCESS\n\tRunning test_assert_not_equals_fails_when_equal ... SUCCESS\n\tRunning test_assert_not_equals_succeeds_when_not_equal ... SUCCESS\n\tRunning test_assert_not_matches_fails_when_matching ... SUCCESS\n\tRunning test_assert_not_matches_succeed_when_not_matching ... SUCCESS\n\tRunning test_assert_shows_stderr_on_failure ... SUCCESS\n\tRunning test_assert_shows_stdout_on_failure ... SUCCESS\n\tRunning test_assert_status_code_fails ... SUCCESS\n\tRunning test_assert_status_code_succeeds ... SUCCESS\n\tRunning test_assert_succeeds ... SUCCESS\n\tRunning test_assert_within_delta_fails ... SUCCESS\n\tRunning test_assert_within_delta_succeeds ... SUCCESS\n\tRunning test_fail_fails ... SUCCESS\n\tRunning test_fail_prints_failure_message ... SUCCESS\n\tRunning test_fail_prints_where_is_error ... SUCCESS\n\tRunning test_fake_actually_fakes_the_command ... SUCCESS\n\tRunning test_fake_can_fake_inline ... 
SUCCESS\n\tRunning test_fake_echo_stdin_when_no_params ... SUCCESS\n\tRunning test_fake_exports_faked_in_subshells ... SUCCESS\n\tRunning test_fake_transmits_params_to_fake_code ... SUCCESS\n\tRunning test_fake_transmits_params_to_fake_code_as_array ... SUCCESS\n\tRunning test_should_pretty_format_even_when_LANG_is_unset ... SUCCESS\nOverall result: SUCCESS\n```\n\nYou might also want to run only specific tests; you may do so with the\n_-p_ option. This option accepts a pattern as parameter and filters test\nfunctions against this pattern.\n\n```test\n./bash_unit -p fail_fails -p assert tests/test_core.sh\n```\n\n```output\nRunning tests in tests/test_core.sh\n\tRunning test_assert_equals_fails_when_not_equal ... SUCCESS\n\tRunning test_assert_equals_succeed_when_equal ... SUCCESS\n\tRunning test_assert_fails ... SUCCESS\n\tRunning test_assert_fails_fails ... SUCCESS\n\tRunning test_assert_fails_succeeds ... SUCCESS\n\tRunning test_assert_matches_fails_when_not_matching ... SUCCESS\n\tRunning test_assert_matches_succeed_when_matching ... SUCCESS\n\tRunning test_assert_no_diff_fails_when_diff ... SUCCESS\n\tRunning test_assert_no_diff_succeeds_when_no_diff ... SUCCESS\n\tRunning test_assert_not_equals_fails_when_equal ... SUCCESS\n\tRunning test_assert_not_equals_succeeds_when_not_equal ... SUCCESS\n\tRunning test_assert_not_matches_fails_when_matching ... SUCCESS\n\tRunning test_assert_not_matches_succeed_when_not_matching ... SUCCESS\n\tRunning test_assert_shows_stderr_on_failure ... SUCCESS\n\tRunning test_assert_shows_stdout_on_failure ... SUCCESS\n\tRunning test_assert_status_code_fails ... SUCCESS\n\tRunning test_assert_status_code_succeeds ... SUCCESS\n\tRunning test_assert_succeeds ... SUCCESS\n\tRunning test_assert_within_delta_fails ... SUCCESS\n\tRunning test_assert_within_delta_succeeds ... SUCCESS\n\tRunning test_fail_fails ... SUCCESS\nOverall result: SUCCESS\n```\n\nYou can combine the _-p_ option with _-s_ to skip some of the tests. 
This option accepts a pattern\nas parameter and marks as skipped any test function which matches this pattern.\n\n```test\n./bash_unit -p fail_fails -p assert -s no -s status tests/test_core.sh\n```\n\n```output\nRunning tests in tests/test_core.sh\n\tRunning test_assert_equals_fails_when_not_equal ... SKIPPED\n\tRunning test_assert_matches_fails_when_not_matching ... SKIPPED\n\tRunning test_assert_no_diff_fails_when_diff ... SKIPPED\n\tRunning test_assert_no_diff_succeeds_when_no_diff ... SKIPPED\n\tRunning test_assert_not_equals_fails_when_equal ... SKIPPED\n\tRunning test_assert_not_equals_succeeds_when_not_equal ... SKIPPED\n\tRunning test_assert_not_matches_fails_when_matching ... SKIPPED\n\tRunning test_assert_not_matches_succeed_when_not_matching ... SKIPPED\n\tRunning test_assert_status_code_fails ... SKIPPED\n\tRunning test_assert_status_code_succeeds ... SKIPPED\n\tRunning test_assert_equals_succeed_when_equal ... SUCCESS\n\tRunning test_assert_fails ... SUCCESS\n\tRunning test_assert_fails_fails ... SUCCESS\n\tRunning test_assert_fails_succeeds ... SUCCESS\n\tRunning test_assert_matches_succeed_when_matching ... SUCCESS\n\tRunning test_assert_shows_stderr_on_failure ... SUCCESS\n\tRunning test_assert_shows_stdout_on_failure ... SUCCESS\n\tRunning test_assert_succeeds ... SUCCESS\n\tRunning test_assert_within_delta_fails ... SUCCESS\n\tRunning test_assert_within_delta_succeeds ... SUCCESS\n\tRunning test_fail_fails ... 
SUCCESS\nOverall result: SUCCESS\n```\n\n*bash_unit* supports the http://testanything.org/[Test Anything Protocol] so you can ask for a tap formatted\noutput with the _-f_ option.\n\n```test\n./bash_unit -f tap tests/test_core.sh\n```\n\n```output\n# Running tests in tests/test_core.sh\nok - test_assert_equals_fails_when_not_equal\nok - test_assert_equals_succeed_when_equal\nok - test_assert_fails\nok - test_assert_fails_fails\nok - test_assert_fails_succeeds\nok - test_assert_matches_fails_when_not_matching\nok - test_assert_matches_succeed_when_matching\nok - test_assert_no_diff_fails_when_diff\nok - test_assert_no_diff_succeeds_when_no_diff\nok - test_assert_not_equals_fails_when_equal\nok - test_assert_not_equals_succeeds_when_not_equal\nok - test_assert_not_matches_fails_when_matching\nok - test_assert_not_matches_succeed_when_not_matching\nok - test_assert_shows_stderr_on_failure\nok - test_assert_shows_stdout_on_failure\nok - test_assert_status_code_fails\nok - test_assert_status_code_succeeds\nok - test_assert_succeeds\nok - test_assert_within_delta_fails\nok - test_assert_within_delta_succeeds\nok - test_fail_fails\nok - test_fail_prints_failure_message\nok - test_fail_prints_where_is_error\nok - test_fake_actually_fakes_the_command\nok - test_fake_can_fake_inline\nok - test_fake_echo_stdin_when_no_params\nok - test_fake_exports_faked_in_subshells\nok - test_fake_transmits_params_to_fake_code\nok - test_fake_transmits_params_to_fake_code_as_array\nok - test_should_pretty_format_even_when_LANG_is_unset\n1..30\n```\n\n== How to write tests\n\nWrite your test functions in a file. The name of a test function has to start with *test*. 
Only functions starting with *test* will be tested.\n\nUse the *bash_unit* assertion functions in your test functions (see below).\n\nYou may write a *setup* function that will be executed before each test is run.\n\nYou may write a *teardown* function that will be executed after each test is run.\n\nYou may write a *setup_suite* function that will be executed only once before all the tests of your test file.\n\nYou may write a *teardown_suite* function that will be executed only once after all the tests of your test file.\n\nIf you write code outside of any bash function, this code will be executed once at test file loading time since\nyour file is a bash script and *bash_unit* sources it before running your tests. It is suggested to write a\n*setup_suite* function and avoid any code outside a bash function. You must not use any bash_unit assertion,\nnor call exit, in setup_suite; otherwise teardown_suite will not be run.\nSee https://github.com/bash-unit/bash_unit/issues/43[issue 43] for more details.\n\nIf you want to keep an eye on a test not yet implemented, prefix the name of the function with *todo* instead of *test*.\nTodo tests are not executed and do not impact the global status of your test suite but are displayed in *bash_unit* output.\n\n*bash_unit* changes the current working directory to that of the running test file. If you need to access files from your test code, for instance the script under test, use paths relative to the test file.\n\nYou may need to change the behavior of some commands to create conditions for your code under test to behave as expected. The *fake* function may help you to do that, see below.\n\n== Test functions\n\n*bash_unit* supports several shell-oriented assertion functions.\n\n=== *fail*\n\n    fail [message]\n\nFails the test and displays an optional message.\n\n```test\ntest_can_fail() {\n  fail \"this test failed on purpose\"\n}\n```\n\n```output\n\tRunning test_can_fail ... 
FAILURE\nthis test failed on purpose\ndoc:2:test_can_fail()\n```\n\n=== *assert*\n\n    assert \u003cassertion\u003e [message]\n\nEvaluates _assertion_ and fails if _assertion_ fails.\n\n_assertion_ fails if its evaluation returns a status code different from 0.\n\nIn case of failure, the standard output and error of the evaluated _assertion_ is displayed. The optional message is also displayed.\n\n```test\ntest_assert_fails() {\n  assert false \"this test failed, obviously\"\n}\ntest_assert_succeed() {\n  assert true\n}\n```\n\n```output\n\tRunning test_assert_fails ... FAILURE\nthis test failed, obviously\ndoc:2:test_assert_fails()\n\tRunning test_assert_succeed ... SUCCESS\n```\n\nBut you probably want to assert less obvious facts.\n\n```test\ncode() {\n  touch /tmp/the_file\n}\n\ntest_code_creates_the_file() {\n  code\n\n  assert \"test -e /tmp/the_file\"\n}\n\ntest_code_makes_the_file_executable() {\n  code\n\n  assert \"test -x /tmp/the_file\" \"/tmp/the_file should be executable\"\n}\n```\n\n```output\n\tRunning test_code_creates_the_file ... SUCCESS\n\tRunning test_code_makes_the_file_executable ... FAILURE\n/tmp/the_file should be executable\ndoc:14:test_code_makes_the_file_executable()\n```\n\nIt may also be fun to use assert to check for the expected content of a file.\n\n```test\ncode() {\n  echo 'not so cool' \u003e /tmp/the_file\n}\n\ntest_code_write_appropriate_content_in_the_file() {\n  code\n\n  assert \"diff \u003c(echo 'this is cool') /tmp/the_file\"\n}\n```\n\n```output\n\tRunning test_code_write_appropriate_content_in_the_file ... FAILURE\nout\u003e 1c1\nout\u003e \u003c this is cool\nout\u003e ---\nout\u003e \u003e not so cool\ndoc:8:test_code_write_appropriate_content_in_the_file()\n```\n\n=== *assert_fail*\n\n    assert_fail \u003cassertion\u003e [message]\n\nAsserts that _assertion_ fails. 
This is the opposite of *assert*.\n\n_assertion_ fails if its evaluation returns a status code different from 0.\n\nIf the evaluated expression does not fail, then *assert_fail* will fail and display the standard output and error of the evaluated _assertion_. The optional message is also displayed.\n\n```test\ncode() {\n  echo 'not so cool' \u003e /tmp/the_file\n}\n\ntest_code_does_not_write_cool_in_the_file() {\n  code\n\n  assert_fails \"grep cool /tmp/the_file\" \"should not write 'cool' in /tmp/the_file\"\n}\n\ntest_code_does_not_write_this_in_the_file() {\n  code\n\n  assert_fails \"grep this /tmp/the_file\" \"should not write 'this' in /tmp/the_file\"\n}\n```\n\n```output\n\tRunning test_code_does_not_write_cool_in_the_file ... FAILURE\nshould not write 'cool' in /tmp/the_file\nout\u003e not so cool\ndoc:8:test_code_does_not_write_cool_in_the_file()\n\tRunning test_code_does_not_write_this_in_the_file ... SUCCESS\n```\n\n=== *assert_status_code*\n\n    assert_status_code \u003cexpected_status_code\u003e \u003cassertion\u003e [message]\n\nChecks for a precise status code of the evaluation of _assertion_.\n\nIt may be useful if you want to distinguish between several error conditions in your code.\n\nIn case of failure, the standard output and error of the evaluated _assertion_ is displayed. The optional message is also displayed.\n\n```test\ncode() {\n  exit 23\n}\n\ntest_code_should_fail_with_code_25() {\n  assert_status_code 25 code\n}\n```\n\n```output\n\tRunning test_code_should_fail_with_code_25 ... 
FAILURE\n expected status code 25 but was 23\ndoc:6:test_code_should_fail_with_code_25()\n```\n\n=== *assert_equals*\n\n    assert_equals \u003cexpected\u003e \u003cactual\u003e [message]\n\nAsserts for equality of the two strings _expected_ and _actual_.\n\n```test\ntest_obvious_inequality_with_assert_equals(){\n  assert_equals \"a string\" \"another string\" \"a string should be another string\"\n}\ntest_obvious_equality_with_assert_equals(){\n  assert_equals a a\n}\n\n```\n\n```output\n\tRunning test_obvious_equality_with_assert_equals ... SUCCESS\n\tRunning test_obvious_inequality_with_assert_equals ... FAILURE\na string should be another string\n expected [a string] but was [another string]\ndoc:2:test_obvious_inequality_with_assert_equals()\n```\n\n=== *assert_not_equals*\n\n    assert_not_equals \u003cunexpected\u003e \u003cactual\u003e [message]\n\nAsserts for inequality of the two strings _unexpected_ and _actual_.\n\n```test\ntest_obvious_equality_with_assert_not_equals(){\n  assert_not_equals \"a string\" \"a string\" \"a string should be different from another string\"\n}\ntest_obvious_inequality_with_assert_not_equals(){\n  assert_not_equals a b\n}\n\n```\n\n```output\n\tRunning test_obvious_equality_with_assert_not_equals ... FAILURE\na string should be different from another string\n expected different value than [a string] but was the same\ndoc:2:test_obvious_equality_with_assert_not_equals()\n\tRunning test_obvious_inequality_with_assert_not_equals ... 
SUCCESS\n```\n\n=== *assert_matches*\n\n    assert_matches \u003cexpected-regex\u003e \u003cactual\u003e [message]\n\nAsserts that the string _actual_ matches the regex pattern _expected-regex_.\n\n```test\ntest_obvious_notmatching_with_assert_matches(){\n  assert_matches \"a str.*\" \"another string\" \"'another string' should not match 'a str.*'\"\n}\ntest_obvious_matching_with_assert_matches(){\n  assert_matches \"a[nN].t{0,1}.*r str.*\" \"another string\"\n}\n\n```\n\n```output\n\tRunning test_obvious_matching_with_assert_matches ... SUCCESS\n\tRunning test_obvious_notmatching_with_assert_matches ... FAILURE\n'another string' should not match 'a str.*'\n expected regex [a str.*] to match [another string]\ndoc:2:test_obvious_notmatching_with_assert_matches()\n```\n\n=== *assert_not_matches*\n\n    assert_not_matches \u003cunexpected-regex\u003e \u003cactual\u003e [message]\n\nAsserts that the string _actual_ does not match the regex pattern _unexpected-regex_.\n\n```test\ntest_obvious_matching_with_assert_not_matches(){\n  assert_not_matches \"a str.*\" \"a string\" \"'a string' should not match 'a str.*'\"\n}\ntest_obvious_notmatching_with_assert_not_matches(){\n  assert_not_matches \"a str.*\" \"another string\"\n}\n\n```\n\n```output\n\tRunning test_obvious_matching_with_assert_not_matches ... FAILURE\n'a string' should not match 'a str.*'\n expected regex [a str.*] should not match but matched [a string]\ndoc:2:test_obvious_matching_with_assert_not_matches()\n\tRunning test_obvious_notmatching_with_assert_not_matches ... 
SUCCESS\n```\n\n=== *assert_within_delta*\n\n    assert_within_delta \u003cexpected num\u003e \u003cactual num\u003e \u003cmax delta\u003e [message]\n\nAsserts that the expected num matches the actual num up to a given max delta.\nThis function only supports integers.\nGiven an expectation of 5 and a delta of 2 this would match 3, 4, 5, 6, and 7:\n\n```test\ntest_matches_within_delta(){\n  assert_within_delta 5 3 2\n  assert_within_delta 5 4 2\n  assert_within_delta 5 5 2\n  assert_within_delta 5 6 2\n  assert_within_delta 5 7 2\n}\ntest_does_not_match_within_delta(){\n  assert_within_delta 5 2 2\n}\n\n```\n\n```output\n\tRunning test_does_not_match_within_delta ... FAILURE\n expected value [5] to match [2] with a maximum delta of [2]\ndoc:9:test_does_not_match_within_delta()\n\tRunning test_matches_within_delta ... SUCCESS\n```\n\n=== *assert_no_diff*\n\n    assert_no_diff \u003cexpected\u003e \u003cactual\u003e [message]\n\nAsserts that the content of the file _actual_ has no differences from that of the file _expected_.\n\n```test\ntest_obvious_notmatching_with_assert_no_diff(){\n  assert_no_diff \u003c(echo foo) \u003c(echo bar)\n}\ntest_obvious_matching_with_assert_assert_no_diff(){\n  assert_no_diff bash_unit bash_unit\n}\n\n```\n\n```output\n\tRunning test_obvious_matching_with_assert_assert_no_diff ... SUCCESS\n\tRunning test_obvious_notmatching_with_assert_no_diff ... 
FAILURE\n expected 'doc' to be identical to 'doc' but was different\nout\u003e 1c1\nout\u003e \u003c foo\nout\u003e ---\nout\u003e \u003e bar\ndoc:2:test_obvious_notmatching_with_assert_no_diff()\n```\n\n== *skip_if* function\n\n    skip_if \u003ccondition\u003e \u003cpattern\u003e\n\nIf _condition_ is true, skips all the tests in the current file which match the given _pattern_.\n\nThis can be useful when one has tests that depend on the system environment, for instance:\n\n```test\nskip_if \"uname | grep Darwin\" linux\nskip_if \"uname | grep Linux\" darwin\n\ntest_linux_proc_exists() {\n  assert \"ls /proc/\" \"there should exist /proc on Linux\"\n}\ntest_darwin_proc_does_not_exist() {\n  assert_fail \"ls /proc/\" \"there should not exist /proc on Darwin\"\n}\n```\n\nwill output, on a Linux system:\n\n```output\n\tRunning test_darwin_proc_does_not_exist ... SKIPPED\n\tRunning test_linux_proc_exists ... SUCCESS\n```\n\n== *fake* function\n\n    fake \u003ccommand\u003e [replacement code]\n\nFakes _command_ and replaces it with _replacement code_ (if code is specified) for the rest of the execution of your test. If no replacement code is specified, then *fake* replaces _command_ with one that echoes the stdin of *fake*. This may be useful if you need to simulate an environment for your code under test.\n\nFor instance:\n\n```test\nfake ps echo hello world\nps\n```\n\nwill output:\n\n```output\nhello world\n```\n\nWe can do the same using _stdin_ of fake:\n\n```test\nfake ps \u003c\u003c EOF\nhello world\nEOF\nps\n```\n\n```output\nhello world\n```\n\nifndef::backend-manpage[]\nIt has been asked whether using *fake* results in creating actual fakes or stubs or mocks? or maybe spies? or maybe dummies?\nThe first answer to this question is: it depends. 
The second is: read this\nhttps://www.google.fr/search?tbm=isch\u0026q=fake%20mock%20stub[great and detailed literature] on this subject.\nendif::[]\n\n=== Using stdin\n\nHere is an example, parameterizing fake with its _stdin_ to test that code fails when some process does not run and succeeds otherwise:\n\n```test\ncode() {\n  ps a | grep apache\n}\n\ntest_code_succeeds_if_apache_runs() {\n  fake ps \u003c\u003cEOF\n  PID TTY          TIME CMD\n13525 pts/7    00:00:01 bash\n24162 pts/7    00:00:00 ps\n 8387 ?            0:00 /usr/sbin/apache2 -k start\nEOF\n\n  assert code \"code should succeed when apache is running\"\n}\n\ntest_code_fails_if_apache_does_not_run() {\n  fake ps \u003c\u003cEOF\n  PID TTY          TIME CMD\n13525 pts/7    00:00:01 bash\n24162 pts/7    00:00:00 ps\nEOF\n\n  assert_fails code \"code should fail when apache is not running\"\n}\n\n```\n\n```output\n\tRunning test_code_fails_if_apache_does_not_run ... SUCCESS\n\tRunning test_code_succeeds_if_apache_runs ... SUCCESS\n```\n\n=== Using a function\n\nIn a previous example, we faked _ps_ by specifying code inline:\n\n```test\nfake ps echo hello world\nps\n```\n\n```output\nhello world\n```\n\nIf you need to write more complex code to fake your command, you may abstract this code in a function:\n\n```test\n_ps() {\n  echo hello world\n}\nfake ps _ps\nps\n```\n\n```output\nhello world\n```\n\nBe careful however that your _ps function is not exported to sub-processes. It means that, depending on how your code under test works, _ps may not be defined in the context where ps will be called. 
For instance:\n\n```test\n_ps() {\n  echo hello world\n}\nfake ps _ps\n\nbash -c ps\n```\n\n```output\nenvironment: line 1: _ps: command not found\n```\n\nIt depends on your code under test but it is safer to just export functions needed by your fake so that they are available in sub-processes:\n\n```test\n_ps() {\n  echo hello world\n}\nexport -f _ps\nfake ps _ps\n\nbash -c ps\n```\n\n```output\nhello world\n```\n\n*fake* is also limited by the fact that it defines a _bash_ function to\noverride the actual command. In some contexts the command cannot be\noverridden by a function. For instance, if your code under test relies on _exec_ to launch _ps_, *fake* will have no effect.\n\n*fake* may also trigger strange behaviors in bash_unit when you try to\nfake really basic stuff. bash_unit tries to be as immune to this as\npossible but there are some limits. In particular, as surprising as it\nmight seem, bash allows creating functions named after builtin commands\nand bash_unit won't resist that kind of situation. So, for instance, do\nnot try to fake: `exit`; `local`; `trap`; `eval`; `export`; `if`; `then`; `else`; `fi`; `while`; `do`; `done`; `$`; `echo`; `[` (I know, this is not a builtin but don't).\n\n=== *fake* parameters\n\n*fake* stores parameters given to the fake in the global variable _FAKE_PARAMS_ so that you can use them inside your fake.\n\nIt may be useful if you need to adapt the fake's behavior depending on the given parameters.\n\nIt can also help in asserting the values of these parameters ... but this may be quite tricky.\n\nFor instance, in our previous code that checks that apache is running, we have an issue since our code does not use _ps_ with the appropriate parameters. 
So we will try to check that the parameters given to _ps_ are _ax_.\n\nTo do that, the first naive approach would be:\n\n```test\ncode() {\n  ps a | grep apache\n}\n\ntest_code_gives_ps_appropriate_parameters() {\n  _ps() {\n    cat \u003c\u003cEOF\n  PID TTY          TIME CMD\n13525 pts/7    00:00:01 bash\n24162 pts/7    00:00:00 ps\n 8387 ?            0:00 /usr/sbin/apache2 -k start\nEOF\n    assert_equals ax \"${FAKE_PARAMS[@]}\"\n  }\n  export -f _ps\n  fake ps _ps\n\n  code \u003e/dev/null\n}\n```\n\nThis test calls _code_, which calls _ps_, which is actually implemented by __ps_. Since _code_ does not use _ax_ but only _a_ as parameters, this test should fail. But ...\n\n```output\n\tRunning test_code_gives_ps_appropriate_parameters ... SUCCESS\n```\n\nThe problem here is that _ps_ fails (because of the failed *assert_equals* assertion). But _ps_ is piped with _grep_:\n\n```shell\ncode() {\n  ps a | grep apache\n}\n```\n\nWith bash, the result code of a pipeline equals the result code of the last command of the pipeline. The last command is _grep_ and since _grep_ succeeds, the failure of __ps_ is lost and our test succeeds. We have only succeeded in messing with the test output, nothing more.\n\nAn alternative may be to activate the bash _pipefail_ option but this may introduce unwanted side effects. We can also simply not output anything in __ps_ so that _grep_ fails:\n\n```shell\ncode() {\n  ps a | grep apache\n}\n\ntest_code_gives_ps_appropriate_parameters() {\n  _ps() {\n    assert_equals ax \"${FAKE_PARAMS[@]}\"\n  }\n  export -f _ps\n  fake ps _ps\n\n  code \u003e/dev/null\n}\n```\n\nThe problem here is that we use a trick to make the code under test fail but the\nfailure has nothing to do with the actual *assert_equals* failure. 
This is really\nbad, don't do that.\n\nMoreover, *assert_equals* output is captured by _ps_ and this just messes with the display of our test results:\n\n```shell\n\tRunning test_code_gives_ps_appropriate_parameters ...\n```\n\nThe only correct alternative is for the fake _ps_ to write _FAKE_PARAMS_ in a file descriptor\nso that your test can grab them after code execution and assert their value. For instance\nby writing to a file:\n\n```test\ncode() {\n  ps a | grep apache\n}\n\ntest_code_gives_ps_appropriate_parameters() {\n  _ps() {\n    echo ${FAKE_PARAMS[@]} \u003e /tmp/fake_params\n  }\n  export -f _ps\n  fake ps _ps\n\n  code || true\n\n  assert_equals ax \"$(head -n1 /tmp/fake_params)\"\n}\n\nsetup() {\n  rm -f /tmp/fake_params\n}\n```\n\nHere our fake writes to _/tmp/fake_params_. We delete this file in *setup* to be\nsure that we do not get inappropriate data from a previous test. We assert\nthat the first line of _/tmp/fake_params_ equals _ax_. Also, note that, since we know\nthat _code_ will fail, we write `code || true` to ignore the error.\n\n\n```output\n\tRunning test_code_gives_ps_appropriate_parameters ... FAILURE\n expected [ax] but was [a]\ndoc:14:test_code_gives_ps_appropriate_parameters()\n```\n\nWe can also compact the fake definition:\n\n```test\ncode() {\n  ps a | grep apache\n}\n\ntest_code_gives_ps_appropriate_parameters() {\n  fake ps 'echo ${FAKE_PARAMS[@]} \u003e/tmp/fake_params'\n\n  code || true\n\n  assert_equals ax \"$(head -n1 /tmp/fake_params)\"\n}\n\nsetup() {\n  rm -f /tmp/fake_params\n}\n```\n\n```output\n\tRunning test_code_gives_ps_appropriate_parameters ... 
FAILURE\n expected [ax] but was [a]\ndoc:10:test_code_gives_ps_appropriate_parameters()\n```\n\nFinally, we can avoid the _/tmp/fake_params_ temporary file by using _coproc_:\n\n```test\ncode() {\n  ps a | grep apache\n}\n\ntest_get_data_from_fake() {\n  # Fasten your seat belt ...\n  coproc cat\n  exec {test_channel}\u003e\u0026${COPROC[1]}\n  fake ps 'echo ${FAKE_PARAMS[@]} \u003e\u0026$test_channel'\n\n  code || true\n\n  assert_equals ax \"$(head -n1 \u003c\u0026${COPROC[0]})\"\n}\n\n```\n\n```output\n\tRunning test_get_data_from_fake ... FAILURE\n expected [ax] but was [a]\ndoc:13:test_get_data_from_fake()\n```\n","funding_links":["https://github.com/sponsors/pgrange","https://liberapay.com/bash_unit"],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbash-unit%2Fbash_unit","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fbash-unit%2Fbash_unit","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbash-unit%2Fbash_unit/lists"}