{"id":13469555,"url":"https://github.com/callstack/reassure","last_synced_at":"2025-05-13T16:12:12.933Z","repository":{"id":41309725,"uuid":"463068634","full_name":"callstack/reassure","owner":"callstack","description":"Performance testing companion for React and React Native","archived":false,"fork":false,"pushed_at":"2025-04-08T02:00:58.000Z","size":9343,"stargazers_count":1341,"open_issues_count":17,"forks_count":32,"subscribers_count":21,"default_branch":"main","last_synced_at":"2025-04-09T19:05:06.050Z","etag":null,"topics":["hacktoberfest","performance","performance-testing","react","react-native","regression","testing"],"latest_commit_sha":null,"homepage":"https://callstack.github.io/reassure/","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/callstack.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2022-02-24T08:34:03.000Z","updated_at":"2025-04-09T05:40:45.000Z","dependencies_parsed_at":"2023-02-15T07:01:51.204Z","dependency_job_id":"3a639539-ee9a-4ac2-bec8-970a59e1c5df","html_url":"https://github.com/callstack/reassure","commit_stats":null,"previous_names":[],"tags_count":182,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/callstack%2Freassure","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/callstack%2Freassure/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/callstack%2Freassure/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositor
ies/callstack%2Freassure/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/callstack","download_url":"https://codeload.github.com/callstack/reassure/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":250514767,"owners_count":21443208,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["hacktoberfest","performance","performance-testing","react","react-native","regression","testing"],"created_at":"2024-07-31T15:01:44.745Z","updated_at":"2025-04-23T21:02:10.784Z","avatar_url":"https://github.com/callstack.png","language":"TypeScript","readme":"\u003cp align=\"center\"\u003e\n  \u003cpicture\u003e\n    \u003csource media=\"(prefers-color-scheme: dark)\" srcset=\"https://github.com/callstack/reassure/raw/main/packages/reassure/docs/logo-dark.png\"\u003e\n    \u003cimg src=\"https://github.com/callstack/reassure/raw/main/packages/reassure/docs/logo.png\" width=\"400px\" alt=\"Reassure\" /\u003e\n  \u003c/picture\u003e\n\u003c/p\u003e\n\u003cp align=\"center\"\u003ePerformance testing companion for React and React Native.\u003c/p\u003e\n\u003cp align=\"center\"\u003e\n  \u003cpicture\u003e\n    \u003csource media=\"(prefers-color-scheme: dark)\" srcset=\"https://github.com/callstack/reassure/raw/main/packages/reassure/docs/callstack-x-entain-dark.png\"\u003e\n    \u003cimg src=\"https://github.com/callstack/reassure/raw/main/packages/reassure/docs/callstack-x-entain.png\" width=\"327px\" alt=\"Callstack x Entain\" /\u003e\n  \u003c/picture\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  
\u003ca href=\"https://callstack.github.io/reassure/\"\u003e\u003cb\u003eRead The Docs\u003c/b\u003e\u003c/a\u003e\n\u003c/p\u003e\n\n---\n\n- [The problem](#the-problem)\n- [This solution](#this-solution)\n- [Installation and setup](#installation-and-setup)\n  - [Writing your first test](#writing-your-first-test)\n    - [Writing async tests](#writing-async-tests)\n  - [Measuring test performance](#measuring-test-performance)\n  - [Write performance testing script](#write-performance-testing-script)\n- [CI setup](#ci-setup)\n  - [Scaffolding](#scaffolding)\n    - [CI Script (`reassure-tests.sh`)](#ci-script-reassure-testssh)\n    - [Dangerfile](#dangerfile)\n    - [`.gitignore`](#gitignore)\n  - [CI script (`reassure-tests.sh`)](#ci-script-reassure-testssh-1)\n  - [Integration](#integration)\n    - [Updating existing Dangerfile](#updating-existing-dangerfile)\n    - [Creating Dangerfile](#creating-dangerfile)\n    - [Updating the CI configuration file](#updating-the-ci-configuration-file)\n- [Assessing CI stability](#assessing-ci-stability)\n- [Analyzing results](#analyzing-results)\n- [API](#api)\n  - [Measurements](#measurements)\n    - [`measureRenders` function](#measurerenders-function)\n    - [`MeasureRendersOptions` type](#measurerendersoptions-type)\n    - [`measureFunction` function](#measurefunction-function)\n    - [`MeasureFunctionOptions` type](#measurefunctionoptions-type)\n    - [`measureAsyncFunction` function](#measureasyncfunction-function)\n    - [`MeasureAsyncFunctionOptions` type](#measureasyncfunctionoptions-type)\n  - [Configuration](#configuration)\n    - [Default configuration](#default-configuration)\n    - [`configure` function](#configure-function)\n    - [`resetToDefaults` function](#resettodefaults-function)\n    - [Environmental variables](#environmental-variables)\n- [External References](#external-references)\n- [Contributing](#contributing)\n- [License](#license)\n- [Made with ❤️ at Callstack](#made-with-️-at-callstack)\n\n## The 
problem\n\nYou want your React Native app to perform well and fast at all times. As a part of this goal, you profile the app, observe render patterns, apply memoization in the right places, etc. But it's all manual and too easy to unintentionally introduce performance regressions that would only get caught during QA or worse, by your users.\n\n## This solution\n\nReassure allows you to automate React Native app performance regression testing on CI or a local machine. In the same way, you write your integration and unit tests that automatically verify that your app is still _working correctly_, you can write performance tests that verify that your app is still _working performantly_.\n\nYou can think about it as a React performance testing library. In fact, Reassure is designed to reuse as much of your [React Native Testing Library](https://github.com/callstack/react-native-testing-library) tests and setup as possible.\n\nReassure works by measuring render characteristics – duration and count – of the testing scenario you provide and comparing that to the stable version. It repeats the scenario multiple times to reduce the impact of random variations in render times caused by the runtime environment. Then, it applies statistical analysis to determine whether the code changes are statistically significant. 
As a result, it generates a human-readable report summarizing the results and displays it on the CI or as a comment to your pull request.\n\nIn addition to measuring component render times, it can also measure the execution of regular JavaScript functions.\n\n## Installation and setup\n\nTo install Reassure, run the following command in your app folder:\n\nUsing yarn\n\n```sh\nyarn add --dev reassure\n```\n\nUsing npm\n\n```sh\nnpm install --save-dev reassure\n```\n\nYou will also need a working [Jest](https://jestjs.io/docs/getting-started) setup as well as one of either [React Native Testing Library](https://github.com/callstack/react-native-testing-library#installation) or [React Testing Library](https://testing-library.com/docs/react-testing-library/intro).\n\nSee [Installation guide](https://callstack.github.io/reassure/docs/installation).\n\nYou can check our example projects:\n\n- [React Native (Expo)](https://github.com/callstack/reassure-examples/tree/main/examples/native-expo)\n- [React Native (CLI)](https://github.com/callstack/reassure-examples/tree/main/examples/native-cli)\n- [React.js (Next.js)](https://github.com/callstack/reassure-examples/tree/main/examples/web-nextjs)\n- [React.js (Vite)](https://github.com/callstack/reassure-examples/tree/main/examples/web-vite)\n\nReassure will try to detect which Testing Library you have installed. If both React Native Testing Library and React Testing Library are present, it will warn you about that and give precedence to React Native Testing Library. 
You can explicitly specify Testing Library to be used by using [`configure`](#configure-function) option:\n\n```ts\nconfigure({ testingLibrary: 'react-native' });\n// or\nconfigure({ testingLibrary: 'react' });\n```\n\nYou should set it in your Jest setup file, and you can override it in particular test files if needed.\n\n### Writing your first test\n\nNow that the library is installed, you can write your first test scenario in a file with `.perf-test.js`/`.perf-test.tsx` extension:\n\n```ts\n// ComponentUnderTest.perf-test.tsx\nimport { measureRenders } from 'reassure';\nimport { ComponentUnderTest } from './ComponentUnderTest';\n\ntest('Simple test', async () =\u003e {\n  await measureRenders(\u003cComponentUnderTest /\u003e);\n});\n```\n\nThis test will measure render times of `ComponentUnderTest` during mounting and resulting sync effects.\n\n\u003e **Note**: Reassure will automatically match test filenames using Jest's `--testMatch` option with value `\"**/__perf__/**/*.[jt]s?(x)\", \"**/*.(perf|perf-test).[jt]s?(x)\"`. However, if you want to pass a custom `--testMatch` or `--testRegex` option, you may add it to the `reassure measure` script to pass your own glob. 
More about `--testMatch` and `--testRegex` in the [Jest docs](https://jestjs.io/docs/configuration#testmatch-arraystring).\n\n#### Writing async tests\n\nIf your component contains any async logic or you want to test some interaction, you should pass the `scenario` option:\n\n```ts\nimport { measureRenders } from 'reassure';\nimport { screen, fireEvent } from '@testing-library/react-native';\nimport { ComponentUnderTest } from './ComponentUnderTest';\n\ntest('Test with scenario', async () =\u003e {\n  const scenario = async () =\u003e {\n    fireEvent.press(screen.getByText('Go'));\n    await screen.findByText('Done');\n  };\n\n  await measureRenders(\u003cComponentUnderTest /\u003e, { scenario });\n});\n```\n\nThe body of the `scenario` function uses familiar React Native Testing Library methods.\n\nIf you are using a version of React Native Testing Library older than v10.1.0, where the [`screen` helper](https://callstack.github.io/react-native-testing-library/docs/api/#screen) is not available, the `scenario` function provides it as its first argument:\n\n```ts\nimport { measureRenders } from 'reassure';\nimport { fireEvent } from '@testing-library/react-native';\nimport { ComponentUnderTest } from './ComponentUnderTest';\n\ntest('Test with scenario', async () =\u003e {\n  const scenario = async (screen) =\u003e {\n    fireEvent.press(screen.getByText('Go'));\n    await screen.findByText('Done');\n  };\n\n  await measureRenders(\u003cComponentUnderTest /\u003e, { scenario });\n});\n```\n\nIf your test contains any async changes, you will need to make sure that the scenario waits for these changes to settle, e.g. using `findBy` queries, `waitFor` or `waitForElementToBeRemoved` functions from RNTL.\n\n### Measuring test performance\n\nTo measure your first test's performance, run the following command in the terminal:\n\n```sh\nyarn reassure\n```\n\nThis command will run your tests multiple times using Jest, gathering performance statistics, and will write them to the `.reassure/current.perf` file. 
To verify your setup, check that the output file exists after running the command for the first time.\n\n\u003e **Note:** You can add the `.reassure/` folder to your `.gitignore` file to avoid accidentally committing your results.\n\nReassure CLI will automatically try to detect your source code branch name and commit hash when you are using Git. You can override these options, e.g. if you are using a different version control system:\n\n```sh\nyarn reassure --branch [branch name] --commit-hash [commit hash]\n```\n\n### Write performance testing script\n\nTo detect performance changes, you must measure the performance of two versions of your code: current (your modified code) and baseline (your reference point, e.g. the `main` branch). To measure performance on two branches, you must switch branches in Git or clone two copies of your repository.\n\nWe want to automate this task to run on the CI. To do that, you will need to create a performance-testing script. You should save it in your repository, e.g. 
as `reassure-tests.sh`.\n\nA simple version of such a script, using a branch-changing approach, is as follows:\n\n```sh\n#!/usr/bin/env bash\nset -e\n\nBASELINE_BRANCH=${GITHUB_BASE_REF:=\"main\"}\n\n# Required for `git switch` on CI\ngit fetch origin\n\n# Gather baseline perf measurements\ngit switch \"$BASELINE_BRANCH\"\nyarn install\nyarn reassure --baseline\n\n# Gather current perf measurements \u0026 compare results\ngit switch --detach -\nyarn install\nyarn reassure\n```\n\n## CI setup\n\nTo make setting up the CI integration and all prerequisites more convenient, we have prepared a CLI command to generate all the necessary templates for you to start with.\n\nSimply run:\n\n```bash\nyarn reassure init\n```\n\nThis will generate the following file structure:\n\n```\n├── \u003cROOT\u003e\n│   ├── reassure-tests.sh\n│   ├── dangerfile.ts/js (or dangerfile.reassure.ts/js if dangerfile.ts/js already present)\n│   └── .gitignore\n```\n\n### Scaffolding\n\n#### CI Script (`reassure-tests.sh`)\n\nA basic script allowing you to run Reassure on CI. More on the importance and structure of this file in the following section.\n\n#### Dangerfile\n\nIf your project already contains a `dangerfile.ts/js`, the CLI will not overwrite it in any way. Instead, it will generate a `dangerfile.reassure.ts/js` file, allowing you to compare and update your own at your convenience.\n\n#### `.gitignore`\n\nIf the `.gitignore` file is present and contains no mention of `reassure`, the script will append the `.reassure/` directory to its end.\n\n### CI script (`reassure-tests.sh`)\n\nTo detect performance changes, you must measure the performance of two versions of your code: current (your modified code) and baseline (your reference point, e.g. the `main` branch). To measure performance on two branches, you must switch branches in Git or clone two copies of your repository.\n\nWe want to automate this task to run on the CI. To do that, you will need to create a performance-testing script. 
You should save it in your repository, e.g. as `reassure-tests.sh`.\n\nA simple version of such a script, using a branch-changing approach, is as follows:\n\n```sh\n#!/usr/bin/env bash\nset -e\n\nBASELINE_BRANCH=${GITHUB_BASE_REF:=\"main\"}\n\n# Required for `git switch` on CI\ngit fetch origin\n\n# Gather baseline perf measurements\ngit switch \"$BASELINE_BRANCH\"\nyarn install\nyarn reassure --baseline\n\n# Gather current perf measurements \u0026 compare results\ngit switch --detach -\nyarn install\nyarn reassure\n```\n\n### Integration\n\nAs a final setup step, you must configure your CI to run the performance testing script and output the result.\nAt the moment, we present the output by integrating with Danger JS, which supports all major CI tools.\n\n#### Updating existing Dangerfile\n\nYou will need a working [Danger JS setup](https://danger.systems/js/guides/getting_started.html).\n\nThen add the Reassure Danger JS plugin to your dangerfile:\n\n```ts\n// /\u003cproject_root\u003e/dangerfile.reassure.ts (generated by the init script)\n\nimport path from 'path';\nimport { dangerReassure } from 'reassure';\n\ndangerReassure({\n  inputFilePath: path.join(__dirname, '.reassure/output.md'),\n});\n```\n\n#### Creating Dangerfile\n\nIf you do not have a Dangerfile (`dangerfile.js` or `dangerfile.ts`) yet, you can use the one generated by the `reassure init` script without making any additional changes.\n\nYou can also refer to our example [Dangerfile](https://github.com/callstack/reassure/blob/main/dangerfile.ts).\n\n#### Updating the CI configuration file\n\nFinally, run both the performance testing script \u0026 Danger in your CI config:\n\n```yaml\n- name: Run performance testing script\n  run: ./reassure-tests.sh\n\n- name: Run Danger.js\n  run: yarn danger ci\n  env:\n    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n```\n\nYou can also check our example [GitHub workflow](https://github.com/callstack/reassure/blob/main/.github/workflows/main.yml).\n\nThe above example is 
based on GitHub Actions, but it should be similar to other CI config files and should only serve as a reference in such cases.\n\n\u003e **Note**: Your performance tests will run much longer than regular integration tests. That's because we run each test scenario multiple times (by default, 10) and repeat that for two branches of your code. Hence, each test will run 20 times by default, and even more if you increase the number of runs.\n\n## Assessing CI stability\n\nDuring performance measurements, we measure React component render times with microsecond precision using `React.Profiler`. This means the same code will run faster or slower depending on the machine. For this reason, baseline \u0026 current measurements need to be run on the same machine. Optimally, they should be run one after another.\n\nMoreover, your CI agent needs to have stable performance to achieve meaningful results. It does not matter if your agent is fast or slow as long as it is consistent in its performance. That's why the agent should not be used for any other work during the performance tests that might impact measuring render times.\n\nTo help you assess your machine's stability, you can use the `reassure check-stability` command. It runs performance measurements twice for the current code, so baseline and current measurements refer to the same code. In such a case, the expected change is 0% (no change). The degree of random performance changes will reflect the stability of your machine.\nThis command can be run both on CI and local machines.\n\nNormally, the random changes should be below 5%. Results of 10% and more are considered too high, meaning you should work on improving your machine's stability.\n\n\u003e **Note**: As a last-resort trick, you can increase the `runs` option from the default value of 10 to 20, 50 or even 100 for all or some of your tests, based on the assumption that more test runs will even out measurement fluctuations. 
That will, however, make your tests run even longer.\n\nYou can refer to our example [GitHub workflow](https://github.com/callstack/reassure/blob/main/.github/workflows/stability.yml).\n\n## Analyzing results\n\n\u003cp align=\"center\"\u003e\n\u003cimg src=\"https://github.com/callstack/reassure/raw/main/packages/reassure/docs/report-markdown.png\" width=\"920px\" alt=\"Markdown report\" /\u003e\n\u003c/p\u003e\n\nLooking at the example, you can notice that test scenarios can be assigned to certain categories:\n\n- **Significant Changes To Duration** shows test scenarios where the performance change is statistically significant and **should** be looked into as it marks a potential performance loss/improvement\n- **Meaningless Changes To Duration** shows test scenarios where the performance change is not statistically significant\n- **Changes To Count** shows test scenarios where the render or execution count did change\n- **Added Scenarios** shows test scenarios which do not exist in the baseline measurements\n- **Removed Scenarios** shows test scenarios which do not exist in the current measurements\n\n## API\n\n### Measurements\n\n#### `measureRenders` function\n\nCustom wrapper for the RNTL `render` function responsible for rendering the passed screen inside a `React.Profiler` component,\nmeasuring its performance and writing results to the output file. 
You can use the optional `options` object that allows customizing aspects\nof the testing\n\n```ts\nasync function measureRenders(\n  ui: React.ReactElement,\n  options?: MeasureRendersOptions,\n): Promise\u003cMeasureResults\u003e {\n```\n\n#### `MeasureRendersOptions` type\n\n```ts\ninterface MeasureRendersOptions {\n  runs?: number;\n  warmupRuns?: number;\n  removeOutliers?: boolean;\n  wrapper?: React.ComponentType\u003c{ children: ReactElement }\u003e;\n  scenario?: (view?: RenderResult) =\u003e Promise\u003cany\u003e;\n  writeFile?: boolean;\n  beforeEach?: () =\u003e Promise\u003cvoid\u003e | void;\n  afterEach?: () =\u003e Promise\u003cvoid\u003e | void;\n}\n```\n\n- **`runs`**: number of runs per series for the particular test\n- **`warmupRuns`**: number of additional warmup runs that will be done and discarded before the actual runs (default 1).\n- **`removeOutliers`**: should remove statistical outlier results (default: `true`)\n- **`wrapper`**: React component, such as a `Provider`, which the `ui` will be wrapped with. Note: the render duration of the `wrapper` itself is excluded from the results; only the wrapped component is measured.\n- **`scenario`**: a custom async function, which defines user interaction within the UI by utilising RNTL or RTL functions\n- **`writeFile`**: should write output to file (default `true`)\n- **`beforeEach`**: function to execute before each test run.\n- **`afterEach`**: function to execute after each test run.\n\n#### `measureFunction` function\n\nAllows you to wrap any synchronous function, measure its execution times and write results to the output file. You can use optional `options` to customize aspects of the testing. 
Note: the execution count will always be one.\n\n```ts\nasync function measureFunction(\n  fn: () =\u003e void,\n  options?: MeasureFunctionOptions\n): Promise\u003cMeasureResults\u003e {\n```\n\n#### `MeasureFunctionOptions` type\n\n```ts\ninterface MeasureFunctionOptions {\n  runs?: number;\n  warmupRuns?: number;\n  removeOutliers?: boolean;\n  writeFile?: boolean;\n  beforeEach?: () =\u003e Promise\u003cvoid\u003e | void;\n  afterEach?: () =\u003e Promise\u003cvoid\u003e | void;\n}\n```\n\n- **`runs`**: number of runs per series for the particular test\n- **`warmupRuns`**: number of additional warmup runs that will be done and discarded before the actual runs\n- **`removeOutliers`**: should remove statistical outlier results (default: `true`)\n- **`writeFile`**: should write output to file (default `true`)\n- **`beforeEach`**: function to execute before each test run.\n- **`afterEach`**: function to execute after each test run.\n\n#### `measureAsyncFunction` function\n\nAllows you to wrap any **asynchronous** function, measure its execution times and write results to the output file. You can use optional `options` to customize aspects of the testing. Note: the execution count will always be one.\n\n\u003e **Note**: Measuring performance of asynchronous functions can be tricky. These functions often depend on external conditions like I/O operations, network requests, or storage access, which introduce unpredictable timing variations in your measurements. 
For stable and meaningful performance metrics, **always ensure all external calls are properly mocked in your test environment to avoid polluting your performance measurements with uncontrollable factors.**\n\n```ts\nasync function measureAsyncFunction(\n  fn: () =\u003e Promise\u003cunknown\u003e,\n  options?: MeasureAsyncFunctionOptions\n): Promise\u003cMeasureResults\u003e {\n```\n\n#### `MeasureAsyncFunctionOptions` type\n\n```ts\ninterface MeasureAsyncFunctionOptions {\n  runs?: number;\n  warmupRuns?: number;\n  removeOutliers?: boolean;\n  writeFile?: boolean;\n  beforeEach?: () =\u003e Promise\u003cvoid\u003e | void;\n  afterEach?: () =\u003e Promise\u003cvoid\u003e | void;\n}\n```\n\n- **`runs`**: number of runs per series for the particular test\n- **`warmupRuns`**: number of additional warmup runs that will be done and discarded before the actual runs\n- **`removeOutliers`**: should remove statistical outlier results (default: `true`)\n- **`writeFile`**: should write output to file (default `true`)\n- **`beforeEach`**: function to execute before each test run.\n- **`afterEach`**: function to execute after each test run.\n\n### Configuration\n\n#### Default configuration\n\nThe default config which will be used by the measuring script. This configuration object can be overridden with the use\nof the `configure` function.\n\n```ts\ntype Config = {\n  runs?: number;\n  warmupRuns?: number;\n  outputFile?: string;\n  verbose?: boolean;\n  testingLibrary?:\n    | 'react-native'\n    | 'react'\n    | { render: (component: React.ReactElement\u003cany\u003e) =\u003e any; cleanup: () =\u003e any };\n};\n```\n\n```ts\nconst defaultConfig: Config = {\n  runs: 10,\n  warmupRuns: 1,\n  outputFile: '.reassure/current.perf',\n  verbose: false,\n  testingLibrary: undefined, // Will try auto-detect first RNTL, then RTL\n};\n```\n\n**`runs`**: the number of repeated runs in a series per test (allows for higher accuracy by aggregating more data). 
Should be handled with care.\n\n- **`warmupRuns`**: the number of additional warmup runs that will be done and discarded before the actual runs.\n- **`outputFile`**: the name of the file the records will be saved to.\n- **`verbose`**: make Reassure log more, e.g. for debugging purposes.\n- **`testingLibrary`**: where to look for `render` and `cleanup` functions; supported values are `'react-native'`, `'react'` or an object providing custom `render` and `cleanup` functions.\n\n#### `configure` function\n\n```ts\nfunction configure(customConfig: Partial\u003cConfig\u003e): void;\n```\n\nThe `configure` function can override the default config parameters.\n\n#### `resetToDefaults` function\n\n```ts\nresetToDefaults(): void\n```\n\nResets the current config to the original `defaultConfig` object.\n\n#### Environmental variables\n\nYou can use the available environmental variables to alter your test runner settings.\n\n- `TEST_RUNNER_PATH`: an alternative path for your test runner. Defaults to `'node_modules/.bin/jest'`, or on Windows `'node_modules/jest/bin/jest'`.\n- `TEST_RUNNER_ARGS`: a set of arguments fed to the runner. Defaults to `'--runInBand --testMatch \"**/__perf__/**/*.[jt]s?(x)\", \"**/*.(perf|perf-test).[jt]s?(x)\"'`.\n\nExample:\n\n```sh\nTEST_RUNNER_PATH=myOwnPath/jest/bin yarn reassure\n```\n\n## External References\n\n- [The Ultimate Guide to React Native Optimization 2024 Edition](https://www.callstack.com/campaigns/download-the-ultimate-guide-to-react-native-optimization?utm_campaign=RN_Performance\u0026utm_source=readme_reassure) - Mentioned in the \"Make your app consistently fast\" chapter.\n\n## Contributing\n\nSee the [contributing guide](CONTRIBUTING.md) to learn how to contribute to the repository and the development workflow.\n\n## License\n\n[MIT](./LICENSE)\n\n## Made with ❤️ at Callstack\n\nReassure is an Open Source project and will always remain free to use. 
The project has been developed in close\npartnership with [Entain](https://entaingroup.com/) and was originally their in-house project. Thanks to their\nwillingness to develop the React \u0026 React Native ecosystem, we decided to make it Open Source. If you think it's cool, please star it 🌟\n\nCallstack is a group of React and React Native experts. If you need help with these or want to say hi, contact us at hello@callstack.com!\n\nLike the project? ⚛️ [Join the Callstack team](https://callstack.com/careers/?utm_campaign=Senior_RN\u0026utm_source=github\u0026utm_medium=readme) who does amazing stuff for clients and drives React Native Open Source! 🔥\n","funding_links":[],"categories":["TypeScript"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcallstack%2Freassure","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcallstack%2Freassure","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcallstack%2Freassure/lists"}