{"id":15010560,"url":"https://github.com/nodejs/node-core-test","last_synced_at":"2025-05-15T10:00:23.611Z","repository":{"id":37866210,"uuid":"473352184","full_name":"nodejs/node-core-test","owner":"nodejs","description":"Node 18's node:test, as an npm package","archived":false,"fork":false,"pushed_at":"2024-12-21T10:06:39.000Z","size":521,"stargazers_count":96,"open_issues_count":8,"forks_count":10,"subscribers_count":10,"default_branch":"main","last_synced_at":"2025-05-12T20:35:31.194Z","etag":null,"topics":["nodejs","testing"],"latest_commit_sha":null,"homepage":"","language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/nodejs.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2022-03-23T20:43:42.000Z","updated_at":"2025-03-03T10:32:11.000Z","dependencies_parsed_at":"2024-11-06T17:38:25.830Z","dependency_job_id":"70d63e98-552e-4990-9e44-cbbf6913a117","html_url":"https://github.com/nodejs/node-core-test","commit_stats":{"total_commits":134,"total_committers":19,"mean_commits":7.052631578947368,"dds":0.6567164179104478,"last_synced_commit":"414743a74ef26ba383ae6c4d87e5b806bf7dc831"},"previous_names":["juliangruber/node-core-test"],"tags_count":13,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nodejs%2Fnode-core-test","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nodejs%2Fnode-core-test/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nodejs%2Fnode-core-test/releas
es","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/nodejs%2Fnode-core-test/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/nodejs","download_url":"https://codeload.github.com/nodejs/node-core-test/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254319715,"owners_count":22051072,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["nodejs","testing"],"created_at":"2024-09-24T19:34:43.232Z","updated_at":"2025-05-15T10:00:22.861Z","avatar_url":"https://github.com/nodejs.png","language":"JavaScript","readme":"# The `test` npm package\n\n[![CI](https://github.com/nodejs/node-core-test/actions/workflows/ci.yml/badge.svg)](https://github.com/nodejs/node-core-test/actions/workflows/ci.yml)\n\nThis is a user-land port of [`node:test`](https://nodejs.org/api/test.html),\nthe experimental test runner introduced in Node.js 18. This module makes it\navailable in Node.js 14 and later.\n\nMinimal dependencies, with full test suite.\n\nDifferences from the core implementation:\n\n- Doesn't hide its own stack frames.\n- Some features require the use of the `--experimental-abortcontroller` CLI flag to\n  work on Node.js v14.x. 
It's recommended to pass\n  `NODE_OPTIONS='--experimental-abortcontroller --no-warnings'` in your env if\n  you are testing on v14.x.\n\n## Docs\n\n### Test runner\n\n\u003e Stability: 1 - Experimental\n\n\u003c!-- source_link=lib/test.js --\u003e\n\nThe `node:test` module facilitates the creation of JavaScript tests.\nTo access it:\n\n```mjs\nimport test from 'test'\n```\n\n```cjs\nconst test = require('test')\n```\n\nTests created via the `test` module consist of a single function that is\nprocessed in one of three ways:\n\n1. A synchronous function that is considered failing if it throws an exception,\n   and is considered passing otherwise.\n2. A function that returns a `Promise` that is considered failing if the\n   `Promise` rejects, and is considered passing if the `Promise` resolves.\n3. A function that receives a callback function. If the callback receives any\n   truthy value as its first argument, the test is considered failing. If a\n   falsy value is passed as the first argument to the callback, the test is\n   considered passing. 
If the test function receives a callback function and\n   also returns a `Promise`, the test will fail.\n\nThe following example illustrates how tests are written using the\n`test` module.\n\n```js\ntest('synchronous passing test', t =\u003e {\n  // This test passes because it does not throw an exception.\n  assert.strictEqual(1, 1)\n})\n\ntest('synchronous failing test', t =\u003e {\n  // This test fails because it throws an exception.\n  assert.strictEqual(1, 2)\n})\n\ntest('asynchronous passing test', async t =\u003e {\n  // This test passes because the Promise returned by the async\n  // function is not rejected.\n  assert.strictEqual(1, 1)\n})\n\ntest('asynchronous failing test', async t =\u003e {\n  // This test fails because the Promise returned by the async\n  // function is rejected.\n  assert.strictEqual(1, 2)\n})\n\ntest('failing test using Promises', t =\u003e {\n  // Promises can be used directly as well.\n  return new Promise((resolve, reject) =\u003e {\n    setImmediate(() =\u003e {\n      reject(new Error('this will cause the test to fail'))\n    })\n  })\n})\n\ntest('callback passing test', (t, done) =\u003e {\n  // done() is the callback function. When the setImmediate() runs, it invokes\n  // done() with no arguments.\n  setImmediate(done)\n})\n\ntest('callback failing test', (t, done) =\u003e {\n  // When the setImmediate() runs, done() is invoked with an Error object and\n  // the test fails.\n  setImmediate(() =\u003e {\n    done(new Error('callback failure'))\n  })\n})\n```\n\nIf any tests fail, the process exit code is set to `1`.\n\n#### Subtests\n\nThe test context's `test()` method allows subtests to be created. This method\nbehaves identically to the top level `test()` function. 
The following example\ndemonstrates the creation of a top level test with two subtests.\n\n```js\ntest('top level test', async t =\u003e {\n  await t.test('subtest 1', t =\u003e {\n    assert.strictEqual(1, 1)\n  })\n\n  await t.test('subtest 2', t =\u003e {\n    assert.strictEqual(2, 2)\n  })\n})\n```\n\nIn this example, `await` is used to ensure that both subtests have completed.\nThis is necessary because parent tests do not wait for their subtests to\ncomplete. Any subtests that are still outstanding when their parent finishes\nare cancelled and treated as failures. Any subtest failures cause the parent\ntest to fail.\n\n## Skipping tests\n\nIndividual tests can be skipped by passing the `skip` option to the test, or by\ncalling the test context's `skip()` method as shown in the\nfollowing example.\n\n```js\n// The skip option is used, but no message is provided.\ntest('skip option', { skip: true }, t =\u003e {\n  // This code is never executed.\n})\n\n// The skip option is used, and a message is provided.\ntest('skip option with message', { skip: 'this is skipped' }, t =\u003e {\n  // This code is never executed.\n})\n\ntest('skip() method', t =\u003e {\n  // Make sure to return here as well if the test contains additional logic.\n  t.skip()\n})\n\ntest('skip() method with message', t =\u003e {\n  // Make sure to return here as well if the test contains additional logic.\n  t.skip('this is skipped')\n})\n```\n\n## `describe`/`it` syntax\n\nRunning tests can also be done using `describe` to declare a suite\nand `it` to declare a test.\nA suite is used to organize and group related tests together.\n`it` is an alias for `test`, except there is no test context passed,\nsince nesting is done using suites.\n\n```js\ndescribe('A thing', () =\u003e {\n  it('should work', () =\u003e {\n    assert.strictEqual(1, 1);\n  });\n\n  it('should be ok', () =\u003e {\n    assert.strictEqual(2, 2);\n  });\n\n  describe('a nested thing', () =\u003e {\n    it('should work', () 
=\u003e {\n      assert.strictEqual(3, 3);\n    });\n  });\n});\n```\n\n`describe` and `it` are imported from the `test` module.\n\n```mjs\nimport { describe, it } from 'test';\n```\n\n```cjs\nconst { describe, it } = require('test');\n```\n\n### `only` tests\n\nIf `node--test` is started with the `--test-only` command-line option, it is\npossible to skip all top level tests except for a selected subset by passing\nthe `only` option to the tests that should be run. When a test with the `only`\noption set is run, all subtests are also run. The test context's `runOnly()`\nmethod can be used to implement the same behavior at the subtest level.\n\n```js\n// Assume node--test is run with the --test-only command-line option.\n// The 'only' option is set, so this test is run.\ntest('this test is run', { only: true }, async t =\u003e {\n  // Within this test, all subtests are run by default.\n  await t.test('running subtest')\n\n  // The test context can be updated to run subtests with the 'only' option.\n  t.runOnly(true)\n  await t.test('this subtest is now skipped')\n  await t.test('this subtest is run', { only: true })\n\n  // Switch the context back to execute all tests.\n  t.runOnly(false)\n  await t.test('this subtest is now run')\n\n  // Explicitly do not run these tests.\n  await t.test('skipped subtest 3', { only: false })\n  await t.test('skipped subtest 4', { skip: true })\n})\n\n// The 'only' option is not set, so this test is skipped.\ntest('this test is not run', () =\u003e {\n  // This code is not run.\n  throw new Error('fail')\n})\n```\n\n## Filtering tests by name\n\nThe [`--test-name-pattern`][] command-line option can be used to only run tests\nwhose name matches the provided pattern. Test name patterns are interpreted as\nJavaScript regular expressions. The `--test-name-pattern` option can be\nspecified multiple times in order to run nested tests. 
For each test that is\nexecuted, any corresponding test hooks, such as `beforeEach()`, are also\nrun.\n\nGiven the following test file, starting Node.js with the\n`--test-name-pattern=\"test [1-3]\"` option would cause the test runner to execute\n`test 1`, `test 2`, and `test 3`. If `test 1` did not match the test name\npattern, then its subtests would not execute, despite matching the pattern. The\nsame set of tests could also be executed by passing `--test-name-pattern`\nmultiple times (e.g. `--test-name-pattern=\"test 1\"`,\n`--test-name-pattern=\"test 2\"`, etc.).\n\n```js\ntest('test 1', async (t) =\u003e {\n  await t.test('test 2');\n  await t.test('test 3');\n});\ntest('Test 4', async (t) =\u003e {\n  await t.test('Test 5');\n  await t.test('test 6');\n});\n```\n\nTest name patterns can also be specified using regular expression literals. This\nallows regular expression flags to be used. In the previous example, starting\nNode.js with `--test-name-pattern=\"/test [4-5]/i\"` would match `Test 4` and\n`Test 5` because the pattern is case-insensitive.\n\nTest name patterns do not change the set of files that the test runner executes.\n\n## Extraneous asynchronous activity\n\nOnce a test function finishes executing, the results are reported as quickly\nas possible while maintaining the order of the tests. However, it is possible\nfor the test function to generate asynchronous activity that outlives the test\nitself. The test runner handles this type of activity, but does not delay the\nreporting of test results in order to accommodate it.\n\nIn the following example, a test completes with two `setImmediate()`\noperations still outstanding. The first `setImmediate()` attempts to create a\nnew subtest. 
Because the parent test has already finished and output its\nresults, the new subtest is immediately marked as failed, and reported later\nto the {TestsStream}.\n\nThe second `setImmediate()` creates an `uncaughtException` event.\n`uncaughtException` and `unhandledRejection` events originating from a completed\ntest are marked as failed by the `test` module and reported as diagnostic\nwarnings at the top level by the {TestsStream}.\n\n```js\ntest('a test that creates asynchronous activity', t =\u003e {\n  setImmediate(() =\u003e {\n    t.test('subtest that is created too late', t =\u003e {\n      throw new Error('error1')\n    })\n  })\n\n  setImmediate(() =\u003e {\n    throw new Error('error2')\n  })\n\n  // The test finishes after this line.\n})\n```\n\n## Running tests from the command line\n\nThe Node.js test runner can be invoked from the command line:\n\n```bash\nnode--test\n```\n\nBy default, Node.js will recursively search the current directory for\nJavaScript source files matching a specific naming convention. Matching files\nare executed as test files. More information on the expected test file naming\nconvention and behavior can be found in the [test runner execution model][]\nsection.\n\nAlternatively, one or more paths can be provided as the final argument(s) to\nthe Node.js command, as shown below.\n\n```bash\nnode--test test1.js test2.mjs custom_test_dir/\n```\n\nIn this example, the test runner will execute the files `test1.js` and\n`test2.mjs`. 
The test runner will also recursively search the\n`custom_test_dir/` directory for test files to execute.\n\n### Test runner execution model\n\nWhen searching for test files to execute, the test runner behaves as follows:\n\n- Any files explicitly provided by the user are executed.\n- If the user did not explicitly specify any paths, the current working\n  directory is recursively searched for files as specified in the following\n  steps.\n- `node_modules` directories are skipped unless explicitly provided by the\n  user.\n- If a directory named `test` is encountered, the test runner will search it\n  recursively for all `.js`, `.cjs`, and `.mjs` files. All of these files\n  are treated as test files, and do not need to match the specific naming\n  convention detailed below. This is to accommodate projects that place all of\n  their tests in a single `test` directory.\n- In all other directories, `.js`, `.cjs`, and `.mjs` files matching the\n  following patterns are treated as test files:\n  - `^test$` - Files whose basename is the string `'test'`. Examples:\n    `test.js`, `test.cjs`, `test.mjs`.\n  - `^test-.+` - Files whose basename starts with the string `'test-'`\n    followed by one or more characters. Examples: `test-example.js`,\n    `test-another-example.mjs`.\n  - `.+[\\.\\-\\_]test$` - Files whose basename ends with `.test`, `-test`, or\n    `_test`, preceded by one or more characters. Examples: `example.test.js`,\n    `example-test.cjs`, `example_test.mjs`.\n  - Other file types understood by Node.js such as `.node` and `.json` are not\n    automatically executed by the test runner, but are supported if explicitly\n    provided on the command line.\n\nEach matching test file is executed in a separate child process. If the child\nprocess finishes with an exit code of 0, the test is considered passing.\nOtherwise, the test is considered to be a failure. 
Test files must be\nexecutable by Node.js, but are not required to use the `node:test` module\ninternally.\n\n## Mocking\n\nThe `node:test` module supports mocking during testing via a top-level `mock`\nobject. The following example creates a spy on a function that adds two numbers\ntogether. The spy is then used to assert that the function was called as\nexpected.\n\n```mjs\nimport assert from 'node:assert';\nimport { mock, test } from 'test';\ntest('spies on a function', () =\u003e {\n  const sum = mock.fn((a, b) =\u003e {\n    return a + b;\n  });\n  assert.strictEqual(sum.mock.calls.length, 0);\n  assert.strictEqual(sum(3, 4), 7);\n  assert.strictEqual(sum.mock.calls.length, 1);\n  const call = sum.mock.calls[0];\n  assert.deepStrictEqual(call.arguments, [3, 4]);\n  assert.strictEqual(call.result, 7);\n  assert.strictEqual(call.error, undefined);\n  // Reset the globally tracked mocks.\n  mock.reset();\n});\n```\n\n```cjs\n'use strict';\nconst assert = require('node:assert');\nconst { mock, test } = require('test');\ntest('spies on a function', () =\u003e {\n  const sum = mock.fn((a, b) =\u003e {\n    return a + b;\n  });\n  assert.strictEqual(sum.mock.calls.length, 0);\n  assert.strictEqual(sum(3, 4), 7);\n  assert.strictEqual(sum.mock.calls.length, 1);\n  const call = sum.mock.calls[0];\n  assert.deepStrictEqual(call.arguments, [3, 4]);\n  assert.strictEqual(call.result, 7);\n  assert.strictEqual(call.error, undefined);\n  // Reset the globally tracked mocks.\n  mock.reset();\n});\n```\n\nThe same mocking functionality is also exposed on the [`TestContext`][] object\nof each test. The following example creates a spy on an object method using the\nAPI exposed on the `TestContext`. 
The benefit of mocking via the test context is\nthat the test runner will automatically restore all mocked functionality once\nthe test finishes.\n\n```js\ntest('spies on an object method', (t) =\u003e {\n  const number = {\n    value: 5,\n    add(a) {\n      return this.value + a;\n    },\n  };\n  t.mock.method(number, 'add');\n  assert.strictEqual(number.add.mock.calls.length, 0);\n  assert.strictEqual(number.add(3), 8);\n  assert.strictEqual(number.add.mock.calls.length, 1);\n  const call = number.add.mock.calls[0];\n  assert.deepStrictEqual(call.arguments, [3]);\n  assert.strictEqual(call.result, 8);\n  assert.strictEqual(call.target, undefined);\n  assert.strictEqual(call.this, number);\n});\n```\n\n## Test reporters\n\n\u003c!-- YAML\nadded: REPLACEME\n--\u003e\n\nThe `node:test` module supports passing [`--test-reporter`][]\nflags for the test runner to use a specific reporter.\n\nThe following built-in reporters are supported:\n\n* `tap`\n  The `tap` reporter is the default reporter used by the test runner. 
It outputs\n  the test results in the [TAP][] format.\n\n* `spec`\n  The `spec` reporter outputs the test results in a human-readable format.\n\n* `dot`\n  The `dot` reporter outputs the test results in a compact format,\n  where each passing test is represented by a `.`,\n  and each failing test is represented by a `X`.\n\n### Custom reporters\n\n[`--test-reporter`][] can be used to specify a path to a custom reporter.\nA custom reporter is a module that exports a value\naccepted by [stream.compose][].\nReporters should transform events emitted by a {TestsStream}.\n\nExample of a custom reporter using {stream.Transform}:\n\n```mjs\nimport { Transform } from 'node:stream';\nconst customReporter = new Transform({\n  writableObjectMode: true,\n  transform(event, encoding, callback) {\n    switch (event.type) {\n      case 'test:start':\n        callback(null, `test ${event.data.name} started`);\n        break;\n      case 'test:pass':\n        callback(null, `test ${event.data.name} passed`);\n        break;\n      case 'test:fail':\n        callback(null, `test ${event.data.name} failed`);\n        break;\n      case 'test:plan':\n        callback(null, 'test plan');\n        break;\n      case 'test:diagnostic':\n        callback(null, event.data.message);\n        break;\n    }\n  },\n});\nexport default customReporter;\n```\n\n```cjs\nconst { Transform } = require('node:stream');\nconst customReporter = new Transform({\n  writableObjectMode: true,\n  transform(event, encoding, callback) {\n    switch (event.type) {\n      case 'test:start':\n        callback(null, `test ${event.data.name} started`);\n        break;\n      case 'test:pass':\n        callback(null, `test ${event.data.name} passed`);\n        break;\n      case 'test:fail':\n        callback(null, `test ${event.data.name} failed`);\n        break;\n      case 'test:plan':\n        callback(null, 'test plan');\n        break;\n      case 'test:diagnostic':\n        callback(null, event.data.message);\n  
      break;\n    }\n  },\n});\nmodule.exports = customReporter;\n```\n\nExample of a custom reporter using a generator function:\n\n```mjs\nexport default async function * customReporter(source) {\n  for await (const event of source) {\n    switch (event.type) {\n      case 'test:start':\n        yield `test ${event.data.name} started\\n`;\n        break;\n      case 'test:pass':\n        yield `test ${event.data.name} passed\\n`;\n        break;\n      case 'test:fail':\n        yield `test ${event.data.name} failed\\n`;\n        break;\n      case 'test:plan':\n        yield 'test plan';\n        break;\n      case 'test:diagnostic':\n        yield `${event.data.message}\\n`;\n        break;\n    }\n  }\n}\n```\n\n```cjs\nmodule.exports = async function * customReporter(source) {\n  for await (const event of source) {\n    switch (event.type) {\n      case 'test:start':\n        yield `test ${event.data.name} started\\n`;\n        break;\n      case 'test:pass':\n        yield `test ${event.data.name} passed\\n`;\n        break;\n      case 'test:fail':\n        yield `test ${event.data.name} failed\\n`;\n        break;\n      case 'test:plan':\n        yield 'test plan\\n';\n        break;\n      case 'test:diagnostic':\n        yield `${event.data.message}\\n`;\n        break;\n    }\n  }\n};\n```\n\nThe value provided to `--test-reporter` should be a string like one used in an\n`import()` in JavaScript code, or a value provided for [`--import`][].\n\n### Multiple reporters\n\nThe [`--test-reporter`][] flag can be specified multiple times to report test\nresults in several formats. 
In this situation\nit is required to specify a destination for each reporter\nusing [`--test-reporter-destination`][].\nThe destination can be `stdout`, `stderr`, or a file path.\nReporters and destinations are paired according\nto the order they were specified.\n\nIn the following example, the `spec` reporter will output to `stdout`,\nand the `dot` reporter will output to `file.txt`:\n\n```bash\nnode --test-reporter=spec --test-reporter=dot --test-reporter-destination=stdout --test-reporter-destination=file.txt\n```\n\nWhen a single reporter is specified, the destination will default to `stdout`,\nunless a destination is explicitly provided.\n\n## `run([options])`\n\n\u003c!-- YAML\nadded: REPLACEME\n--\u003e\n\n* `options` {Object} Configuration options for running tests. The following\n  properties are supported:\n  * `concurrency` {number|boolean} If a number is provided,\n    then that many files would run in parallel.\n    If `true`, it would run `os.availableParallelism() - 1` test files in\n    parallel.\n    If `false`, it would only run one test file at a time.\n    **Default:** `false`.\n  * `files` {Array} An array containing the list of files to run.\n    **Default:** matching files from [test runner execution model][].\n  * `signal` {AbortSignal} Allows aborting an in-progress test execution.\n  * `timeout` {number} A number of milliseconds the test execution will\n    fail after.\n    If unspecified, subtests inherit this value from their parent.\n    **Default:** `Infinity`.\n  * `inspectPort` {number|Function} Sets inspector port of test child process.\n    This can be a number, or a function that takes no arguments and returns a\n    number. 
If a nullish value is provided, each process gets its own port,\n    incremented from the primary's `process.debugPort`.\n    **Default:** `undefined`.\n\n* Returns: {TestsStream}\n\n```js\nrun({ files: [path.resolve('./tests/test.js')] })\n  .pipe(process.stdout);\n```\n\n## `test([name][, options][, fn])`\n\n- `name` {string} The name of the test, which is displayed when reporting test\n  results. **Default:** The `name` property of `fn`, or `'\u003canonymous\u003e'` if `fn`\n  does not have a name.\n- `options` {Object} Configuration options for the test. The following\n  properties are supported:\n  - `concurrency` {number|boolean} If a number is provided,\n    then that many tests would run in parallel.\n    If `true`, it would run `os.availableParallelism() - 1` tests in parallel.\n    For subtests, it will be `Infinity` tests in parallel.\n    If `false`, it would only run one test at a time.\n    If unspecified, subtests inherit this value from their parent.\n    **Default:** `false`.\n  - `only` {boolean} If truthy, and the test context is configured to run\n    `only` tests, then this test will be run. Otherwise, the test is skipped.\n    **Default:** `false`.\n  - `signal` {AbortSignal} Allows aborting an in-progress test.\n  - `skip` {boolean|string} If truthy, the test is skipped. If a string is\n    provided, that string is displayed in the test results as the reason for\n    skipping the test. **Default:** `false`.\n  - `todo` {boolean|string} If truthy, the test is marked as `TODO`. If a string\n    is provided, that string is displayed in the test results as the reason why\n    the test is `TODO`. **Default:** `false`.\n  - `timeout` {number} A number of milliseconds the test will fail after.\n    If unspecified, subtests inherit this value from their parent.\n    **Default:** `Infinity`.\n- `fn` {Function|AsyncFunction} The function under test. The first argument\n  to this function is a [`TestContext`][] object. 
If the test uses callbacks,\n  the callback function is passed as the second argument. **Default:** A no-op\n  function.\n- Returns: {Promise} Resolved with `undefined` once the test completes.\n\nThe `test()` function is the value imported from the `test` module. Each\ninvocation of this function results in reporting the test to the {TestsStream}.\n\nThe `TestContext` object passed to the `fn` argument can be used to perform\nactions related to the current test. Examples include skipping the test, adding\nadditional diagnostic information, or creating subtests.\n\n`test()` returns a `Promise` that resolves once the test completes. The return\nvalue can usually be discarded for top level tests. However, the return value\nfrom subtests should be used to prevent the parent test from finishing first\nand cancelling the subtest as shown in the following example.\n\n```js\ntest('top level test', async t =\u003e {\n  // The setTimeout() in the following subtest would cause it to outlive its\n  // parent test if 'await' is removed on the next line. Once the parent test\n  // completes, it will cancel any outstanding subtests.\n  await t.test('longer running subtest', async t =\u003e {\n    return new Promise((resolve, reject) =\u003e {\n      setTimeout(resolve, 1000)\n    })\n  })\n})\n```\n\nThe `timeout` option can be used to fail the test if it takes longer than\n`timeout` milliseconds to complete. However, it is not a reliable mechanism for\ncanceling tests because a running test might block the application thread and\nthus prevent the scheduled cancellation.\n\n## `describe([name][, options][, fn])`\n\n* `name` {string} The name of the suite, which is displayed when reporting test\n  results. 
**Default:** The `name` property of `fn`, or `'\u003canonymous\u003e'` if `fn`\n  does not have a name.\n* `options` {Object} Configuration options for the suite.\n  Supports the same options as `test([name][, options][, fn])`.\n* `fn` {Function|AsyncFunction} The function under suite\n  declaring all subtests and subsuites.\n  The first argument to this function is a [`SuiteContext`][] object.\n  **Default:** A no-op function.\n* Returns: `undefined`.\n\nThe `describe()` function is imported from the `test` module. Each\ninvocation of this function results in the creation of a subtest.\nAfter invocation of top level `describe` functions,\nall top level tests and suites will execute.\n\n## `describe.skip([name][, options][, fn])`\n\nShorthand for skipping a suite, same as [`describe([name], { skip: true }[, fn])`][describe options].\n\n## `describe.todo([name][, options][, fn])`\n\nShorthand for marking a suite as `TODO`, same as\n[`describe([name], { todo: true }[, fn])`][describe options].\n\n## `it([name][, options][, fn])`\n\n* `name` {string} The name of the test, which is displayed when reporting test\n  results. 
**Default:** The `name` property of `fn`, or `'\u003canonymous\u003e'` if `fn`\n  does not have a name.\n* `options` {Object} Configuration options for the suite.\n  Supports the same options as `test([name][, options][, fn])`.\n* `fn` {Function|AsyncFunction} The function under test.\n  If the test uses callbacks, the callback function is passed as an argument.\n  **Default:** A no-op function.\n* Returns: `undefined`.\n\nThe `it()` function is the value imported from the `test` module.\n\n## `it.skip([name][, options][, fn])`\n\nShorthand for skipping a test,\nsame as [`it([name], { skip: true }[, fn])`][it options].\n\n## `it.todo([name][, options][, fn])`\n\nShorthand for marking a test as `TODO`,\nsame as [`it([name], { todo: true }[, fn])`][it options].\n\n### `before([fn][, options])`\n\n* `fn` {Function|AsyncFunction} The hook function.\n  If the hook uses callbacks,\n  the callback function is passed as the second argument. **Default:** A no-op\n  function.\n* `options` {Object} Configuration options for the hook. The following\n  properties are supported:\n  * `signal` {AbortSignal} Allows aborting an in-progress hook.\n  * `timeout` {number} A number of milliseconds the hook will fail after.\n    If unspecified, subtests inherit this value from their parent.\n    **Default:** `Infinity`.\n\nThis function is used to create a hook that runs before a suite is executed.\n\n```js\ndescribe('tests', async () =\u003e {\n  before(() =\u003e console.log('about to run some test'));\n  it('is a subtest', () =\u003e {\n    assert.ok('some relevant assertion here');\n  });\n});\n```\n\n### `after([fn][, options])`\n\n* `fn` {Function|AsyncFunction} The hook function.\n  If the hook uses callbacks,\n  the callback function is passed as the second argument. **Default:** A no-op\n  function.\n* `options` {Object} Configuration options for the hook. 
The following\n  properties are supported:\n  * `signal` {AbortSignal} Allows aborting an in-progress hook.\n  * `timeout` {number} A number of milliseconds the hook will fail after.\n    If unspecified, subtests inherit this value from their parent.\n    **Default:** `Infinity`.\n\nThis function is used to create a hook that runs after a suite has finished.\n\n```js\ndescribe('tests', async () =\u003e {\n  after(() =\u003e console.log('finished running tests'));\n  it('is a subtest', () =\u003e {\n    assert.ok('some relevant assertion here');\n  });\n});\n```\n\n### `beforeEach([fn][, options])`\n\n* `fn` {Function|AsyncFunction} The hook function.\n  If the hook uses callbacks,\n  the callback function is passed as the second argument. **Default:** A no-op\n  function.\n* `options` {Object} Configuration options for the hook. The following\n  properties are supported:\n  * `signal` {AbortSignal} Allows aborting an in-progress hook.\n  * `timeout` {number} A number of milliseconds the hook will fail after.\n    If unspecified, subtests inherit this value from their parent.\n    **Default:** `Infinity`.\n\nThis function is used to create a hook running\nbefore each subtest of the current suite.\n\n```js\ndescribe('tests', async () =\u003e {\n  beforeEach(() =\u003e console.log('about to run a test'));\n  it('is a subtest', () =\u003e {\n    assert.ok('some relevant assertion here');\n  });\n});\n```\n\n### `afterEach([fn][, options])`\n\n* `fn` {Function|AsyncFunction} The hook function.\n  If the hook uses callbacks,\n  the callback function is passed as the second argument. **Default:** A no-op\n  function.\n* `options` {Object} Configuration options for the hook. 
The following\n  properties are supported:\n  * `signal` {AbortSignal} Allows aborting an in-progress hook.\n  * `timeout` {number} A number of milliseconds the hook will fail after.\n    If unspecified, subtests inherit this value from their parent.\n    **Default:** `Infinity`.\n\nThis function is used to create a hook running\nafter each subtest of the current suite.\n\n```js\ndescribe('tests', async () =\u003e {\n  afterEach(() =\u003e console.log('finished running a test'));\n  it('is a subtest', () =\u003e {\n    assert.ok('some relevant assertion here');\n  });\n});\n```\n\n## Class: `MockFunctionContext`\n\n\u003c!-- YAML\nadded: REPLACEME\n--\u003e\n\nThe `MockFunctionContext` class is used to inspect or manipulate the behavior of\nmocks created via the [`MockTracker`][] APIs.\n\n### `ctx.calls`\n\n\u003c!-- YAML\nadded: REPLACEME\n--\u003e\n\n* {Array}\n\nA getter that returns a copy of the internal array used to track calls to the\nmock. Each entry in the array is an object with the following properties.\n\n* `arguments` {Array} An array of the arguments passed to the mock function.\n* `error` {any} If the mocked function threw then this property contains the\n  thrown value. **Default:** `undefined`.\n* `result` {any} The value returned by the mocked function.\n* `stack` {Error} An `Error` object whose stack can be used to determine the\n  callsite of the mocked function invocation.\n* `target` {Function|undefined} If the mocked function is a constructor, this\n  field contains the class being constructed. Otherwise this will be\n  `undefined`.\n* `this` {any} The mocked function's `this` value.\n\n### `ctx.callCount()`\n\n\u003c!-- YAML\nadded: REPLACEME\n--\u003e\n\n* Returns: {integer} The number of times that this mock has been invoked.\n\nThis function returns the number of times that this mock has been invoked. 
This
function is more efficient than checking `ctx.calls.length` because `ctx.calls`
is a getter that creates a copy of the internal call tracking array.

### `ctx.mockImplementation(implementation)`

<!-- YAML
added: REPLACEME
-->

* `implementation` {Function|AsyncFunction} The function to be used as the
  mock's new implementation.

This function is used to change the behavior of an existing mock.

The following example creates a mock function using `t.mock.fn()`, calls the
mock function, and then changes the mock implementation to a different function.

```js
test('changes a mock behavior', (t) => {
  let cnt = 0;
  function addOne() {
    cnt++;
    return cnt;
  }
  function addTwo() {
    cnt += 2;
    return cnt;
  }
  const fn = t.mock.fn(addOne);
  assert.strictEqual(fn(), 1);
  fn.mock.mockImplementation(addTwo);
  assert.strictEqual(fn(), 3);
  assert.strictEqual(fn(), 5);
});
```

### `ctx.mockImplementationOnce(implementation[, onCall])`

<!-- YAML
added: REPLACEME
-->

* `implementation` {Function|AsyncFunction} The function to be used as the
  mock's implementation for the invocation number specified by `onCall`.
* `onCall` {integer} The invocation number that will use `implementation`. If
  the specified invocation has already occurred then an exception is thrown.
  **Default:** The number of the next invocation.

This function is used to change the behavior of an existing mock for a single
invocation.
Once invocation `onCall` has occurred, the mock will revert to
whatever behavior it would have used had `mockImplementationOnce()` not been
called.

The following example creates a mock function using `t.mock.fn()`, calls the
mock function, changes the mock implementation to a different function for the
next invocation, and then resumes its previous behavior.

```js
test('changes a mock behavior once', (t) => {
  let cnt = 0;
  function addOne() {
    cnt++;
    return cnt;
  }
  function addTwo() {
    cnt += 2;
    return cnt;
  }
  const fn = t.mock.fn(addOne);
  assert.strictEqual(fn(), 1);
  fn.mock.mockImplementationOnce(addTwo);
  assert.strictEqual(fn(), 3);
  assert.strictEqual(fn(), 4);
});
```

### `ctx.restore()`

<!-- YAML
added: REPLACEME
-->

Resets the implementation of the mock function to its original behavior. The
mock can still be used after calling this function.

## Class: `MockTracker`

<!-- YAML
added: REPLACEME
-->

The `MockTracker` class is used to manage mocking functionality. The test runner
module provides a top level `mock` export which is a `MockTracker` instance.
Each test also provides its own `MockTracker` instance via the test context's
`mock` property.

### `mock.fn([original[, implementation]][, options])`

<!-- YAML
added: REPLACEME
-->

* `original` {Function|AsyncFunction} An optional function to create a mock on.
  **Default:** A no-op function.
* `implementation` {Function|AsyncFunction} An optional function used as the
  mock implementation for `original`. This is useful for creating mocks that
  exhibit one behavior for a specified number of calls and then restore the
  behavior of `original`. **Default:** The function specified by `original`.
* `options` {Object} Optional configuration options for the mock function.
The
  following properties are supported:
  * `times` {integer} The number of times that the mock will use the behavior of
    `implementation`. Once the mock function has been called `times` times, it
    will automatically restore the behavior of `original`. This value must be an
    integer greater than zero. **Default:** `Infinity`.
* Returns: {Proxy} The mocked function. The mocked function contains a special
  `mock` property, which is an instance of [`MockFunctionContext`][], and can
  be used for inspecting and changing the behavior of the mocked function.

This function is used to create a mock function.

The following example creates a mock function that increments a counter by one
on each invocation. The `times` option is used to modify the mock behavior such
that the first two invocations add two to the counter instead of one.

```js
test('mocks a counting function', (t) => {
  let cnt = 0;
  function addOne() {
    cnt++;
    return cnt;
  }
  function addTwo() {
    cnt += 2;
    return cnt;
  }
  const fn = t.mock.fn(addOne, addTwo, { times: 2 });
  assert.strictEqual(fn(), 2);
  assert.strictEqual(fn(), 4);
  assert.strictEqual(fn(), 5);
  assert.strictEqual(fn(), 6);
});
```

### `mock.getter(object, methodName[, implementation][, options])`

<!-- YAML
added: REPLACEME
-->

This function is syntax sugar for [`MockTracker.method`][] with `options.getter`
set to `true`.

### `mock.method(object, methodName[, implementation][, options])`

<!-- YAML
added: REPLACEME
-->

* `object` {Object} The object whose method is being mocked.
* `methodName` {string|symbol} The identifier of the method on `object` to mock.
  If `object[methodName]` is not a function, an error is thrown.
* `implementation` {Function|AsyncFunction} An optional function used as the
  mock implementation for `object[methodName]`.
**Default:** The original method
  specified by `object[methodName]`.
* `options` {Object} Optional configuration options for the mock method. The
  following properties are supported:
  * `getter` {boolean} If `true`, `object[methodName]` is treated as a getter.
    This option cannot be used with the `setter` option. **Default:** `false`.
  * `setter` {boolean} If `true`, `object[methodName]` is treated as a setter.
    This option cannot be used with the `getter` option. **Default:** `false`.
  * `times` {integer} The number of times that the mock will use the behavior of
    `implementation`. Once the mocked method has been called `times` times, it
    will automatically restore the original behavior. This value must be an
    integer greater than zero. **Default:** `Infinity`.
* Returns: {Proxy} The mocked method. The mocked method contains a special
  `mock` property, which is an instance of [`MockFunctionContext`][], and can
  be used for inspecting and changing the behavior of the mocked method.

This function is used to create a mock on an existing object method.
The
following example demonstrates how a mock is created on an existing object
method.

```js
test('spies on an object method', (t) => {
  const number = {
    value: 5,
    subtract(a) {
      return this.value - a;
    },
  };
  t.mock.method(number, 'subtract');
  assert.strictEqual(number.subtract.mock.calls.length, 0);
  assert.strictEqual(number.subtract(3), 2);
  assert.strictEqual(number.subtract.mock.calls.length, 1);
  const call = number.subtract.mock.calls[0];
  assert.deepStrictEqual(call.arguments, [3]);
  assert.strictEqual(call.result, 2);
  assert.strictEqual(call.error, undefined);
  assert.strictEqual(call.target, undefined);
  assert.strictEqual(call.this, number);
});
```

### `mock.reset()`

<!-- YAML
added: REPLACEME
-->

This function restores the default behavior of all mocks that were previously
created by this `MockTracker` and disassociates the mocks from the
`MockTracker` instance. Once disassociated, the mocks can still be used, but the
`MockTracker` instance can no longer be used to reset their behavior or
otherwise interact with them.

After each test completes, this function is called on the test context's
`MockTracker`. If the global `MockTracker` is used extensively, calling this
function manually is recommended.

### `mock.restoreAll()`

<!-- YAML
added: REPLACEME
-->

This function restores the default behavior of all mocks that were previously
created by this `MockTracker`.
Unlike `mock.reset()`, `mock.restoreAll()` does
not disassociate the mocks from the `MockTracker` instance.

### `mock.setter(object, methodName[, implementation][, options])`

<!-- YAML
added: REPLACEME
-->

This function is syntax sugar for [`MockTracker.method`][] with `options.setter`
set to `true`.

## Class: `TestsStream`

<!-- YAML
added: REPLACEME
-->

* Extends {ReadableStream}

A successful call to the [`run()`][] method returns a new {TestsStream}
object that streams a series of events representing the execution of the tests.
`TestsStream` emits events in the order the tests are defined.

### Event: `'test:diagnostic'`

* `data` {Object}
  * `file` {string|undefined} The path of the test file,
    or `undefined` if the test was not run through a file.
  * `message` {string} The diagnostic message.
  * `nesting` {number} The nesting level of the test.

Emitted when [`context.diagnostic`][] is called.

### Event: `'test:fail'`

* `data` {Object}
  * `details` {Object} Additional execution metadata.
    * `duration` {number} The duration of the test in milliseconds.
    * `error` {Error} The error thrown by the test.
  * `file` {string|undefined} The path of the test file,
    or `undefined` if the test was not run through a file.
  * `name` {string} The test name.
  * `nesting` {number} The nesting level of the test.
  * `testNumber` {number} The ordinal number of the test.
  * `todo` {string|boolean|undefined} Present if [`context.todo`][] is called.
  * `skip` {string|boolean|undefined} Present if [`context.skip`][] is called.

Emitted when a test fails.

### Event: `'test:pass'`

* `data` {Object}
  * `details` {Object} Additional execution metadata.
    * `duration` {number} The duration of the test in milliseconds.
  * `file` {string|undefined} The path of the test file,
    or `undefined` if the test was not run through a file.
  * `name` {string} The test name.
  * `nesting` {number} The nesting
level of the test.
  * `testNumber` {number} The ordinal number of the test.
  * `todo` {string|boolean|undefined} Present if [`context.todo`][] is called.
  * `skip` {string|boolean|undefined} Present if [`context.skip`][] is called.

Emitted when a test passes.

### Event: `'test:plan'`

* `data` {Object}
  * `file` {string|undefined} The path of the test file,
    or `undefined` if the test was not run through a file.
  * `nesting` {number} The nesting level of the test.
  * `count` {number} The number of subtests that have run.

Emitted when all subtests have completed for a given test.

### Event: `'test:start'`

* `data` {Object}
  * `file` {string|undefined} The path of the test file,
    or `undefined` if the test was not run through a file.
  * `name` {string} The test name.
  * `nesting` {number} The nesting level of the test.

Emitted when a test starts.

## Class: `TestContext`

An instance of `TestContext` is passed to each test function in order to
interact with the test runner. However, the `TestContext` constructor is not
exposed as part of the API.

### `context.beforeEach([fn][, options])`

* `fn` {Function|AsyncFunction} The hook function. The first argument
  to this function is a [`TestContext`][] object. If the hook uses callbacks,
  the callback function is passed as the second argument. **Default:** A no-op
  function.
* `options` {Object} Configuration options for the hook.
The following
  properties are supported:
  * `signal` {AbortSignal} Allows aborting an in-progress hook.
  * `timeout` {number} A number of milliseconds the hook will fail after.
    If unspecified, subtests inherit this value from their parent.
    **Default:** `Infinity`.

This function is used to create a hook that runs
before each subtest of the current test.

```js
test('top level test', async (t) => {
  t.beforeEach((t) => t.diagnostic(`about to run ${t.name}`));
  await t.test(
    'This is a subtest',
    (t) => {
      assert.ok('some relevant assertion here');
    }
  );
});
```

### `context.after([fn][, options])`

<!-- YAML
added: REPLACEME
-->

* `fn` {Function|AsyncFunction} The hook function. The first argument
  to this function is a [`TestContext`][] object. If the hook uses callbacks,
  the callback function is passed as the second argument. **Default:** A no-op
  function.
* `options` {Object} Configuration options for the hook. The following
  properties are supported:
  * `signal` {AbortSignal} Allows aborting an in-progress hook.
  * `timeout` {number} A number of milliseconds the hook will fail after.
    If unspecified, subtests inherit this value from their parent.
    **Default:** `Infinity`.

This function is used to create a hook that runs after the current test
finishes.

```js
test('top level test', async (t) => {
  t.after((t) => t.diagnostic(`finished running ${t.name}`));
  assert.ok('some relevant assertion here');
});
```

### `context.afterEach([fn][, options])`

* `fn` {Function|AsyncFunction} The hook function. The first argument
  to this function is a [`TestContext`][] object. If the hook uses callbacks,
  the callback function is passed as the second argument. **Default:** A no-op
  function.
* `options` {Object} Configuration options for the hook.
The following
  properties are supported:
  * `signal` {AbortSignal} Allows aborting an in-progress hook.
  * `timeout` {number} A number of milliseconds the hook will fail after.
    If unspecified, subtests inherit this value from their parent.
    **Default:** `Infinity`.

This function is used to create a hook that runs
after each subtest of the current test.

```js
test('top level test', async (t) => {
  t.afterEach((t) => t.diagnostic(`finished running ${t.name}`));
  await t.test(
    'This is a subtest',
    (t) => {
      assert.ok('some relevant assertion here');
    }
  );
});
```

### `context.diagnostic(message)`

- `message` {string} Message to be reported.

This function is used to write diagnostics to the output. Any diagnostic
information is included at the end of the test's results. This function does
not return a value.

### `context.name`

The name of the test.

### `context.runOnly(shouldRunOnlyTests)`

- `shouldRunOnlyTests` {boolean} Whether or not to run `only` tests.

If `shouldRunOnlyTests` is truthy, the test context will only run tests that
have the `only` option set. Otherwise, all tests are run. If Node.js was not
started with the [`--test-only`][] command-line option, this function is a
no-op.

### `context.signal`

* [`AbortSignal`][] Can be used to abort test subtasks when the test has been aborted.

> **Warning**
> On Node.js v14.x, this feature won't be available unless you pass the
> `--experimental-abortcontroller` CLI flag or add an external global polyfill
> for `AbortController`.

```js
test('top level test', async (t) => {
  await fetch('some/uri', { signal: t.signal });
});
```

### `context.skip([message])`

* `message` {string} Optional skip message.

This function causes the test's output to indicate the test as skipped. If
`message` is provided, it is included in the output.
Calling `skip()` does
not terminate execution of the test function. This function does not return a
value.

### `context.todo([message])`

* `message` {string} Optional `TODO` message.

This function adds a `TODO` directive to the test's output. If `message` is
provided, it is included in the output. Calling `todo()` does not terminate
execution of the test function. This function does not return a value.

### `context.test([name][, options][, fn])`

- `name` {string} The name of the subtest, which is displayed when reporting
  test results. **Default:** The `name` property of `fn`, or `'<anonymous>'` if
  `fn` does not have a name.
- `options` {Object} Configuration options for the subtest. The following
  properties are supported:
  - `concurrency` {number|boolean|null} If a number is provided,
    then that many tests run in parallel.
    If `true`, all subtests run in parallel.
    If `false`, only one test runs at a time.
    If unspecified, subtests inherit this value from their parent.
    **Default:** `null`.
  - `only` {boolean} If truthy, and the test context is configured to run
    `only` tests, then this test will be run. Otherwise, the test is skipped.
    **Default:** `false`.
  - `skip` {boolean|string} If truthy, the test is skipped. If a string is
    provided, that string is displayed in the test results as the reason for
    skipping the test. **Default:** `false`.
  - `signal` {AbortSignal} Allows aborting an in-progress test.
  - `todo` {boolean|string} If truthy, the test is marked as `TODO`. If a string
    is provided, that string is displayed in the test results as the reason why
    the test is `TODO`. **Default:** `false`.
  - `timeout` {number} A number of milliseconds the test will fail after.
    If unspecified, subtests inherit this value from their parent.
    **Default:** `Infinity`.
- `fn` {Function|AsyncFunction} The function under test.
The first argument
  to this function is a [`TestContext`][] object. If the test uses callbacks,
  the callback function is passed as the second argument. **Default:** A no-op
  function.
- Returns: {Promise} Resolved with `undefined` once the test completes.

This function is used to create subtests under the current test. This function
behaves in the same fashion as the top level [`test()`][] function.

## Class: `SuiteContext`

An instance of `SuiteContext` is passed to each suite function in order to
interact with the test runner. However, the `SuiteContext` constructor is not
exposed as part of the API.

### `context.name`

The name of the suite.

### `context.signal`

* [`AbortSignal`][] Can be used to abort test subtasks when the test has been aborted.

> **Warning**
> On Node.js v14.x, this feature won't be available unless you pass the
> `--experimental-abortcontroller` CLI flag or add an external global polyfill
> for `AbortController`.

[`AbortSignal`]: https://developer.mozilla.org/en-US/docs/Web/API/AbortSignal
[TAP]: https://testanything.org/
[`MockFunctionContext`]: #class-mockfunctioncontext
[`MockTracker.method`]: #mockmethodobject-methodname-implementation-options
[`MockTracker`]: #class-mocktracker
[`SuiteContext`]: #class-suitecontext
[`TestContext`]: #class-testcontext
[`context.diagnostic`]: #contextdiagnosticmessage
[`context.skip`]: #contextskipmessage
[`context.todo`]: #contexttodomessage
[`run()`]: #runoptions
[`test()`]: #testname-options-fn
[describe options]: #describename-options-fn
[it options]: #testname-options-fn
[test runner execution model]: #test-runner-execution-model

## License

[MIT](./LICENSE)