{"id":13771851,"url":"https://github.com/okken/pytest-check","last_synced_at":"2026-03-03T01:03:06.902Z","repository":{"id":26415400,"uuid":"108791429","full_name":"okken/pytest-check","owner":"okken","description":"A pytest plugin that allows multiple failures per test.","archived":false,"fork":false,"pushed_at":"2026-02-25T06:29:54.000Z","size":303,"stargazers_count":402,"open_issues_count":6,"forks_count":41,"subscribers_count":6,"default_branch":"main","last_synced_at":"2026-02-25T10:45:12.652Z","etag":null,"topics":["assertion-library","pytest","pytest-plugin"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/okken.png","metadata":{"files":{"readme":"README.md","changelog":"changelog.md","contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null},"funding":{"github":"okken"}},"created_at":"2017-10-30T02:22:27.000Z","updated_at":"2026-02-25T06:23:10.000Z","dependencies_parsed_at":"2024-01-08T15:13:10.740Z","dependency_job_id":"8abb28f1-aa8a-4ea4-a5ac-04aa2327fb6a","html_url":"https://github.com/okken/pytest-check","commit_stats":{"total_commits":171,"total_committers":23,"mean_commits":7.434782608695652,"dds":0.3450292397660819,"last_synced_commit":"badf3684b0fb0e5741fa61a62c536c49b9e37064"},"previous_names":[],"tags_count":41,"template":false,"template_full_name":null,"purl":"pkg:github/okken/pytest-check","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/okken%2Fpytest-check","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/okken%2Fpytest-check/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/okken%2Fpytest-check/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/okken%2Fpytest-check/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/okken","download_url":"https://codeload.github.com/okken/pytest-check/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/okken%2Fpytest-check/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29920307,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-27T19:37:42.220Z","status":"ssl_error","status_checked_at":"2026-02-27T19:37:41.463Z","response_time":57,"last_error":"SSL_read: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["assertion-library","pytest","pytest-plugin"],"created_at":"2024-08-03T17:00:56.466Z","updated_at":"2026-03-03T01:03:06.875Z","avatar_url":"https://github.com/okken.png","language":"Python","readme":"# pytest-check\n\nA pytest plugin that allows multiple failures per test.\n\n----\n\nNormally, a test function will fail and stop running with the first failed `assert`.\nThat's totally fine for tons of kinds of software tests.\nHowever, there are times where you'd like to check more than one thing, and you'd really like to know the results of each check, even if one of them fails.\n\n`pytest-check` allows multiple failed \"checks\" per test function, so you can see the whole picture of what's going wrong.\n\n## Installation\n\nFrom PyPI:\n\n```\n$ pip install pytest-check\n```\n\nFrom conda (conda-forge):\n```\n$ conda install -c conda-forge pytest-check\n```\n\n## Example\n\nQuick example of where you might want multiple checks:\n\n```python\nimport httpx\nfrom pytest_check import check\n\ndef test_httpx_get():\n    r = httpx.get('https://www.example.org/')\n    # bail if bad status code\n    assert r.status_code == 200\n    # but if we get to here\n    # then check everything else without stopping\n    with check:\n        assert r.is_redirect is False\n    with check:\n        assert r.encoding == 'utf-8'\n    with check:\n        assert 'Example Domain' in r.text\n```\n\n## Import vs fixture\n\nThe example above used import: `from pytest_check import check`.\n\nYou can also grab `check` as a fixture with no import:\n\n```python\ndef test_httpx_get(check):\n    r = httpx.get('https://www.example.org/')\n    ...\n    with check:\n        assert r.is_redirect == False\n    ...\n```\n\n## Validation functions\n\n`check` also helper functions for common checks. 
\nThese methods do NOT need to be inside of a `with check:` block.\n\n| Function    | Meaning    | Notes    |\n|------------------------------------------------------|-----------------------------------|------------------------------------------------------------------------------------------------------|\n| `equal(a, b, msg=\"\")`    | `a == b`    |    |\n| `not_equal(a, b, msg=\"\")`    | `a != b`    |    |\n| `is_(a, b, msg=\"\")`    | `a is b`    |    |\n| `is_not(a, b, msg=\"\")`    | `a is not b`    |    |\n| `is_true(x, msg=\"\")`    | `bool(x) is True`    |    |\n| `is_false(x, msg=\"\")`    | `bool(x) is False`    |    |\n| `is_none(x, msg=\"\")`    | `x is None`    |    |\n| `is_not_none(x, msg=\"\")`    | `x is not None`    |    |\n| `is_in(a, b, msg=\"\")`    | `a in b`    |    |\n| `is_not_in(a, b, msg=\"\")`    | `a not in b`    |    |\n| `is_instance(a, b, msg=\"\")`    | `isinstance(a, b)`    |    |\n| `is_not_instance(a, b, msg=\"\")`    | `not isinstance(a, b)`    |    |\n| `is_nan(x, msg=\"\")`    | `math.isnan(x)`    | [math.isnan](https://docs.python.org/3/library/math.html#math.isnan)   |\n| `is_not_nan(x, msg=\"\")`    | `not math.isnan(x)`    | [math.isnan](https://docs.python.org/3/library/math.html#math.isnan)   |\n| `almost_equal(a, b, rel=None, abs=None, msg=\"\")`    | `a == pytest.approx(b, rel, abs)` | [pytest.approx](https://docs.pytest.org/en/latest/reference.html#pytest-approx)    |\n| `not_almost_equal(a, b, rel=None, abs=None, msg=\"\")` | `a != pytest.approx(b, rel, abs)` | [pytest.approx](https://docs.pytest.org/en/latest/reference.html#pytest-approx)    |\n| `greater(a, b, msg=\"\")`    | `a \u003e b`    |    |\n| `greater_equal(a, b, msg=\"\")`    | `a \u003e= b`    |    |\n| `less(a, b, msg=\"\")`    | `a \u003c b`    |    |\n| `less_equal(a, b, msg=\"\")`    | `a \u003c= b`    |    |\n| `between(b, a, c, msg=\"\", ge=False, le=False)`    | `a \u003c b \u003c c`    |    |\n| `between_equal(b, a, c, msg=\"\")`    | `a \u003c= b \u003c= c`    | same as `between(b, a, c, msg, ge=True, le=True)`    |\n| `raises(expected_exception, *args, **kwargs)`    | *Raises given exception*    | similar to [pytest.raises](https://docs.pytest.org/en/latest/reference/reference.html#pytest-raises) |\n| `fail(msg)`    | *Log a failure*    |    |\n\n**Note: This is a list of relatively common logic operators. I'm reluctant to add to the list too much, as it's easy to add your own.**\n\n\nThe httpx example can be rewritten with helper functions:\n\n```python\ndef test_httpx_get_with_helpers():\n    r = httpx.get('https://www.example.org/')\n    assert r.status_code == 200\n    check.is_false(r.is_redirect)\n    check.equal(r.encoding, 'utf-8')\n    check.is_in('Example Domain', r.text)\n```\n\nWhich you use is personal preference.\n\n## Defining your own check functions\n\n### Using `@check.check_func`\n\nThe `@check.check_func` decorator allows you to wrap any test helper that has an assert statement in it into a non-blocking assert function.\n\n\n```python\nfrom pytest_check import check\n\n@check.check_func\ndef is_four(a):\n    assert a == 4\n\ndef test_all_four():\n    is_four(1)\n    is_four(2)\n    is_four(3)\n    is_four(4)\n```\n\n### Built in check functions return bool\n\nAll of the check functions that come pre-written in pytest-check return a bool value.  \nYou can use that return value to determine if the check passed or failed. 
\n\nSo, if you want to perform some action based on success/failure, you can just do something like:\n\n```python\nfrom pytest_check import check\n\ndef test_something():\n    ...\n    if check.equal(a, b):\n        # they are equal\n        ...\n    else:\n        # they are not equal\n        # and a failure was registered by the check method\n        ...\n```\n\n### Using `check.fail()`\n\nUsing `@check.check_func` is probably the easiest. \nHowever, it does have a bit of overhead in the passing cases \nthat can affect large loops of checks.\n\nIf you need a bit of a speedup, use the following style with the help of `check.fail()`.\n\n```python\nfrom pytest_check import check\n\ndef is_four(a):\n    __tracebackhide__ = True\n    if a == 4:\n        return True\n    else:\n        check.fail(f\"check {a} == 4\")\n        return False\n\ndef test_all_four():\n    is_four(1)\n    is_four(2)\n    is_four(3)\n    is_four(4)\n```\n\n## Using raises as a context manager\n\n`raises` is used as a context manager, much like `pytest.raises`. The main difference is that a failure to raise the right exception won't stop the execution of the test method.\n\n\n```python\nfrom pytest_check import check\n\ndef test_raises():\n    with check.raises(AssertionError):\n        x = 3\n        assert 1 \u003c x \u003c 4\n\ndef test_raises_exception_value():\n    with check.raises(ValueError) as e:\n        raise ValueError(\"This is a ValueError\")\n    check.equal(str(e.value), \"This is a ValueError\")\n```\n\nIf the wrong exception type is raised, the error message reported for the test failure will describe the actual exception.\n\n```python\ndef test_raises_fail():\n    with check.raises(ValueError):\n        x = 1 / 0  # division by zero error, NOT ValueError\n        assert x == 0\n```\n\nIf you want a custom message instead, you can supply one.  \nNote, this doesn't check that msg matches the exception string.  \nIt is simply a custom failure message for your test. \n\n```python\ndef test_raises_and_custom_fail_message():\n    with check.raises(ValueError, msg=\"custom\"):\n        x = 1 / 0  # division by zero error, NOT ValueError\n        assert x == 0\n```\n\n\n## Pseudo-tracebacks\n\nWith `check`, a test can have multiple failures.\nThis would possibly make for extensive output if we include the full traceback for\nevery failure.\nTo make the output a little more concise, `pytest-check` implements a shorter version, which we call pseudo-tracebacks. 
\nAnd to further make the output more concise and speed up the test run, only the first pseudo-traceback is turned on by default.\n\nFor example, take this test:\n\n```python\ndef test_example():\n    a = 1\n    b = 2\n    c = [2, 4, 6]\n    check.greater(a, b)\n    check.less_equal(b, a)\n    check.is_in(a, c, \"Is 1 in the list\")\n    check.is_not_in(b, c, \"make sure 2 isn't in list\")\n```\n\nThis will result in:\n\n```\n$ pytest test_check.py\n...\n================================= FAILURES =================================\n_______________________________ test_example _______________________________\n\nFAILURE: check 1 \u003e 2\ntest_check.py:7 in test_example() -\u003e check.greater(a, b)\n\nFAILURE: check 2 \u003c= 1\nFAILURE: check 1 in [2, 4, 6]: Is 1 in the list\nFAILURE: check 2 not in [2, 4, 6]: make sure 2 isn't in list\n------------------------------------------------------------\nFailed Checks: 4\n========================= short test summary info ==========================\nFAILED test_check.py::test_example - check 1 \u003e 2\n============================ 1 failed in 0.01s =============================\n```\n\n\nIf you wish to see more pseudo-tracebacks than just the first, you can set `--check-max-tb=5` or something larger:\n\n\n```\n(.venv) $ pytest test_check.py --check-max-tb=5\n=========================== test session starts ============================\ncollected 1 item\n\ntest_check.py F                                                      [100%]\n\n================================= FAILURES =================================\n_______________________________ test_example _______________________________\n\nFAILURE: check 1 \u003e 2\ntest_check.py:7 in test_example() -\u003e check.greater(a, b)\n\nFAILURE: check 2 \u003c= 1\ntest_check.py:8 in test_example() -\u003e check.less_equal(b, a)\n\nFAILURE: check 1 in [2, 4, 6]: Is 1 in the list\ntest_check.py:9 in test_example() -\u003e check.is_in(a, c, \"Is 1 in the list\")\n\nFAILURE: check 2 not in [2, 4, 6]: make sure 2 isn't in list\ntest_check.py:10 in test_example() -\u003e check.is_not_in(b, c, \"make sure 2 isn't in list\")\n\n------------------------------------------------------------\nFailed Checks: 4\n========================= short test summary info ==========================\nFAILED test_check.py::test_example - check 1 \u003e 2\n============================ 1 failed in 0.01s =============================\n```\n\n## Red output\n\nThe failures will also be red, unless you turn that off with pytest's `--color=no`.\n\n## No output\n\nYou can turn off the failure reports with pytest's `--tb=no`.\n\n## Stop on Fail (maxfail behavior)\n\nSetting `-x` or `--maxfail=1` will cause this plugin to abort testing after the first failed check.\n\nSetting `--maxfail=2` or greater will turn off any handling of maxfail within this plugin, and the behavior is controlled by pytest.\n\nIn other words, the `maxfail` count is counting tests, not checks.\nThe exception is the case of `1`, where we want to stop on the very first failed check.\n\n## any_failures()\n\nUse `any_failures()` to see if there are any failures.  
\nOne use case is to make a block of checks conditional on not failing in a previous set of checks:\n\n```python\nfrom pytest_check import check\n\ndef test_with_groups_of_checks():\n    # always check these\n    check.equal(1, 1)\n    check.equal(2, 3)\n    if not check.any_failures():\n        # only check these if the above passed\n        check.equal(1, 2)\n        check.equal(2, 2)\n```\n\n## Speedups\n\nIf you have lots of check failures, your tests may not run as fast as you want.\nThere are a few ways to speed things up.\n\n* `--check-max-tb=5` - Only the first 5 failures per test will include pseudo-tracebacks (the rest are reported without them).\n    * The example shows `5` but any number can be used.\n    * pytest-check uses custom traceback code I'm calling a pseudo-traceback.\n    * This is visually shorter than normal assert tracebacks.\n    * Internally, it uses introspection, which can be slow.\n    * Allowing a limited number of pseudo-tracebacks speeds things up quite a bit.\n    * Default is 1.\n        * Set a large number, e.g. 1000, if you want pseudo-tracebacks for all failures.\n\n* `--check-max-report=10` - Limit reported failures per test.\n    * The example shows `10` but any number can be used.\n    * The test will still have the total number of failures reported.\n    * Default is no maximum.\n\n* `--check-max-fail=20` - Stop the test after this many check failures.\n    * This is useful if your code under test is slow-ish and you want to bail early.\n    * Default is no maximum.\n\n* Any of these can be used on their own, or combined.\n\n* Recommendation:\n    * Leave the default, equivalent to `--check-max-tb=1`.\n    * If excessive output is annoying, set `--check-max-report=10` or some tolerable number.\n\n## Local speedups\n\nThe flags above are global settings, and apply to every test in the test run.\n\nLocally, you can set these values per test.\n\nFrom `examples/test_example_speedup_funcs.py`:\n\n```python\ndef test_max_tb():\n    check.set_max_tb(2)\n    for i in range(1, 11):\n        check.equal(i, 100)\n\ndef test_max_report():\n    check.set_max_report(5)\n    for i in range(1, 11):\n        check.equal(i, 100)\n\ndef test_max_fail():\n    check.set_max_fail(5)\n    for i in range(1, 11):\n        check.equal(i, 100)\n```\n\n## Call on fail\n\nIf you want to have a custom action happen for every check failure,\nyou can use the method `call_on_fail(func)`.  \nYou have to pass it a function that accepts a string.  \nThat function will be called with the message from the check failure. \n\nExample:\n\n```python\nfrom pytest_check import check\n\ndef my_func(msg):\n    ...\n\ncheck.call_on_fail(my_func)\n...\n\n```\n\nThere are other uses for this, but the original use case idea was for logging to a file.\n\n### Logging to a file\n\nThe `examples/logging_to_a_file/` directory has an example of how you could set this up. 
\n\nIn a `conftest.py` file, we:\n\n* Configure a `logging` logger to write to a file.\n* Create a small `log_failure(message)` function that uses that logger.\n* Call `check.call_on_fail(log_failure)` to register the function.\n\nAnd that's it: all failures will get logged to a file.\n\n```python\nimport logging\nimport pytest\nfrom pytest_check import check\n\n@pytest.fixture(scope='session', autouse=True)\ndef setup_logging():\n    # logging config\n    log = logging.getLogger(__name__)\n    log.setLevel(logging.DEBUG)\n    fh = logging.FileHandler('session.log')\n    fh.setLevel(logging.DEBUG)\n    fh.setFormatter(logging.Formatter('--- %(asctime)s.%(msecs)03d ---\\n%(message)s', \n                                      datefmt='%Y-%m-%d %H:%M:%S'))\n    log.addHandler(fh)\n    # log start of tests\n    log.info(\"---------\\nStarting test run\\n---------\")\n    # have check failures log to file\n    def log_failure(message):\n        log.error(message)\n    check.call_on_fail(log_failure)\n```\n\nWith that setup, the file will end up looking something like this:\n\n```\n$ cat session.log\n--- 2026-02-26 09:46:39.822 ---\n---------\nStarting test run\n---------\n--- 2026-02-26 09:46:39.831 ---\nFAILURE: check 1 == 2\ntest_log2.py:4 in test_one() -\u003e check.equal(1, 2)\n\n--- 2026-02-26 09:46:39.832 ---\nFAILURE: check 5 == 6\ntest_log2.py:7 in test_two() -\u003e check.equal(5, 6)\n```\n\nOf course, you can modify the logging config to make it look however you want.\n\n\n## Contributing\n\nContributions are very welcome. Tests can be run with [tox](https://tox.readthedocs.io/en/latest/).\nTest coverage is now 100%. Please make sure to keep it at 100%.\nIf you have an awesome pull request and need help with getting coverage back up, let me know.\n\n\n## License\n\nDistributed under the terms of the [MIT](http://opensource.org/licenses/MIT) license, \"pytest-check\" is free and open source software.\n\n## Issues\n\nIf you encounter any problems, please [file an issue](https://github.com/okken/pytest-check/issues) along with a detailed description.\n\n## Changelog\n\nSee [changelog.md](https://github.com/okken/pytest-check/blob/main/changelog.md)\n\n","funding_links":["https://github.com/sponsors/okken"],"categories":["Plugins","Retrying Tests"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fokken%2Fpytest-check","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fokken%2Fpytest-check","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fokken%2Fpytest-check/lists"}