Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/formalsec/test-comp
Test-Comp benchmarking scripts for wasp-c
- Host: GitHub
- URL: https://github.com/formalsec/test-comp
- Owner: formalsec
- Created: 2021-12-08T11:25:20.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2024-03-13T14:45:33.000Z (10 months ago)
- Last Synced: 2024-11-07T04:07:29.184Z (2 months ago)
- Language: Python
- Homepage:
- Size: 30 MB
- Stars: 0
- Watchers: 4
- Forks: 2
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE.Apache-2.0.txt
README
# Running
- Clone the repository and submodules
```shell-session
$ git clone https://github.com/formalsec/Test-Comp.git
$ cd Test-Comp
$ git submodule update --init
```
- Run the script
```shell-session
$ ./run.py
```
- To run various instances in parallel, specify the number of threads with `-j`:
```shell-session
$ ./run.py -j 10
```

# Selecting benchmarks
To run more or fewer benchmarks, edit owi's benchmark definition file at
[share/owic.xml](share/owic.xml). Simply comment or uncomment the XML tag
`` for the benchmarks you wish to run or skip.

# Results
Currently, this script executes owi and searches its stderr for the string
`Reached problem!` to determine whether owi found the problem in a given benchmark.
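The detection step described above can be sketched as follows. This is an illustrative assumption, not the script's actual code: the helper name and the way the benchmark command is invoked are hypothetical, only the marker string comes from the description above.

```python
import subprocess

# Marker the script looks for in stderr (from the description above).
MARKER = "Reached problem!"

def detect(cmd, timeout=60):
    """Run one benchmark command and classify the outcome.

    Returns 'True' if the marker appears in stderr, 'False' if it does
    not, and 'Timeout' if the process exceeds the time limit.
    Hypothetical sketch; `cmd` and `timeout` are assumptions.
    """
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True,
                              timeout=timeout)
    except subprocess.TimeoutExpired:
        return "Timeout"
    return "True" if MARKER in proc.stderr else "False"
```

The three return values mirror the answer column described below.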
**Test-suite validation using test-suite-validator appears to be broken at the moment.**

This script will create a file `results/all.csv` with the answer to every benchmark
in the dataset. The answer column in this file will be `True` if owi detected a
problem, `False` if not, and `Timeout` if owi timed out.
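As a hedged illustration of the `results/all.csv` layout described above, the following sketch writes one row per benchmark with its answer. The column names and helper function are assumptions for illustration, not the script's actual code.

```python
import csv

def write_results(path, results):
    """Write benchmark answers to a CSV file.

    `results` is a list of (benchmark_name, answer) pairs, where answer
    is one of 'True', 'False', or 'Timeout'. Column names are assumed.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["benchmark", "answer"])
        writer.writerows(results)
```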