# Thredds performance tests
## Goals:
- run performance tests locally before and after a fix
- run automated performance regression tests
## To run:
Requires Docker and Docker Compose.
Currently the test data lives in a subdirectory of the [thredds-test-data](https://github.com/Unidata/thredds-test-data) repository. To mount the test data, create a file `tds/.env` containing an environment variable with the proper path, e.g.
```
DATA_DIR=/my/path/to/thredds-test-data/local/thredds-test-data/cdmUnitTest/thredds-performance-tests
```
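Docker Compose reads `tds/.env` automatically, so the compose file can reference `DATA_DIR` in a volume mount. The excerpt below is only a sketch of how that typically looks; the service name and container path are hypothetical, and the real `docker-compose.yml` in `tds/` may differ.
```
# Hypothetical compose excerpt -- service name and container path are
# illustrative, not taken from the actual tds/docker-compose.yml.
services:
  tds:
    image: thredds-performance-tests:latest
    volumes:
      - ${DATA_DIR}:/data/thredds-performance-tests:ro   # DATA_DIR comes from tds/.env
```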
To start TDS, run all tests, and stop TDS:
```
./run-all.sh
```
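This is a convenience wrapper; a rough manual equivalent, assuming `run-all.sh` simply chains the scripts described in the sections below (the actual script may differ), is:
```
# Illustrative only -- assumes run-all.sh chains the scripts documented below.
(cd tds && ./start-default.sh)   # start TDS with caching
(cd tests && ./run.py)           # run the tests (after installing tests/requirements.txt)
(cd tds && ./stop.sh)            # stop TDS
```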
## To run the TDS:
We build a TDS Docker image so we are sure to get the latest snapshot. For now, we don't include the netcdf-c library, as it takes 30 minutes to build.
In the tds directory (`cd tds/`), build the testing docker image:
```
docker build -t thredds-performance-tests:latest .
```
To build using a local WAR file instead of the one from Nexus, use:
```
cp /path/to/my/thredds.war local-war-file/
docker build -t thredds-performance-tests:latest --build-arg USE_LOCAL_WAR=true .
```
To start TDS with caching:
```
./start-default.sh
```
or without caching:
```
./start-no-caching.sh
```
To stop:
```
./stop.sh
```
## To run the tests:
### With Docker
```
cd tests/
docker build --no-cache -t performance-tests:latest .
docker run --rm --network="host" -v ./results/:/usr/tests/results/ performance-tests
```
### With a local Python environment
You must have python3, pip, and ab (ApacheBench) installed.
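To check that the prerequisites are available on your PATH:
```
python3 --version
pip --version
ab -V
```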
For info about the test parameters that can be set, see
```
./tests/run.py --help
```
To run:
```
cd tests/
pip install -r requirements.txt
./run.py
```
The test results are written to `tests/results/results.csv`.
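For a quick look at the results without opening a spreadsheet (the column layout is whatever `run.py` writes):
```
column -s, -t < tests/results/results.csv
```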
## To add a test case
- Either add a new catalog to `tds/thredds/catalogs` to be picked up by the `catalogScan`, or add a file to `tds/data` to be picked up by the `datasetScan`.
- Add a new JSON file, or append to the existing JSON configs in `tests/configs`, including an "id" and "name" for the test and the URL to be hit (see the hypothetical example below). The test id should be unique. The JSON schema used to validate a test is located in `run.py`.
Note that the response code is not currently checked by the tests, but you can see whether requests failed in the log (`results/run.log`).
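Only "id", "name", and a URL are mentioned above, so a config entry presumably looks something like the following; the exact field names (in particular the URL key) and the example URL are assumptions that should be checked against the schema in `run.py` and the existing files in `tests/configs`.
```
{
  "id": "example-fileserver-download",
  "name": "Download a small file via fileServer",
  "url": "http://localhost:8080/thredds/fileServer/path/to/example.nc"
}
```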