# Fuzzle
Fuzzle is a bug synthesizer that generates buggy benchmarks for evaluating
fuzzers. Fuzzle uses randomly created mazes and path constraints from previous
CVEs to generate subject programs. The details of Fuzzle can be found in our
paper "Fuzzle: Making a Puzzle for Fuzzers" (ASE 2022).## Installation
To build Fuzzle and set up the Docker and Python environments, run the following:
```
$ git clone https://github.com/SoftSec-KAIST/Fuzzle
$ cd Fuzzle
# Build docker images (this step may take a few hours to complete)
$ ./scripts/build_all_dockers.sh
# Install the dependencies
$ python3 -m pip install -r ./maze-gen/requirements.txt
```
Note that you need Python 3.7+ and the Z3 solver to generate benchmark programs
using Fuzzle.

For more detailed installation instructions, please see `INSTALL.md`.
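As a quick sanity check (an illustrative sequence, assuming the Z3 Python bindings were installed, e.g. via `requirements.txt`), you can verify that the environment is ready:
```
$ python3 --version          # should report 3.7 or later
$ python3 -c "import z3; print(z3.get_version_string())"   # Z3 bindings are importable
$ docker images              # should list the fuzzer images built above
```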
## Extension for Multiple Bugs
Fuzzle was modified to generate programs that contain multiple bugs for evaluating multi-target directed fuzzers. To use this version of Fuzzle, please switch to the [multi-target](https://github.com/SoftSec-KAIST/Fuzzle/tree/multi-target) branch of this repository.
The details of the modifications can be found in the paper "On the Effectiveness of Synthetic Benchmarks for Evaluating Directed Grey-box Fuzzers" (APSEC 2023).
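If you have already cloned the repository as described above, you can switch to this branch with:
```
$ git checkout multi-target
```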
## Usage
For a quick tutorial on how to use Fuzzle, please skip to the
[tutorial](#tutorial) section.

### Generating a benchmark
You can generate a buggy benchmark using `generate.sh` in the `scripts`
directory. You need to specify values for the following parameters using
command-line arguments:
- Maze generation algorithm (`-a`): Backtracking, Kruskal, Prims, Wilsons,
Sidewinder
- Width of the maze (`-w`): any integer greater than 2
- Height of the maze (`-h`): any integer greater than 2
- Output directory (`-o`): path to the output directory

You can optionally specify the following parameters:
- Pseudorandom seed (`-r`): any integer between 0 and 2^32 - 1
- Percentage of cycles (`-c`): any integer between 0 and 100
- Maze exit (`-e`): default or random
- Generator (`-g`): default_gen or CVE_gen

Note that if you use CVE_gen as the generator, you must also provide a path to an SMT
file with the `-s` option.

For example, the following command generates a default program based on a 10x10
maze created with Wilson's algorithm:

```
$ ./scripts/generate.sh -a Wilsons -w 10 -h 10 -o <output_dir>
```
A different program can be generated from the same maze but with realistic
constraints obtained from a previous CVE (e.g., CVE-2016-4487):

```
$ ./scripts/generate.sh -a Wilsons -w 10 -h 10 -g CVE_gen -s ./CVEs/CVE-2016-4487.smt2 -o <output_dir>
```
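The optional flags can be combined with either generator. For instance, the following command (the parameter values here are illustrative choices, not recommendations from the paper) fixes the pseudorandom seed, sets the percentage of cycles to 50, and randomizes the maze exit:
```
$ ./scripts/generate.sh -a Kruskal -w 10 -h 10 -r 42 -c 50 -e random -o <output_dir>
```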
After the script finishes running, the output directory specified with `-o` will
contain five subdirectories:

- `src`: contains C source code of generated programs
- `bin`: contains compiled binaries of generated programs
- `txt`: contains array form of mazes used in generating programs
- `png`: contains images (.png files) of mazes used
- `sln`: contains the shortest solution path for each maze
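If generation succeeded, listing the output directory should show exactly these five subdirectories:
```
$ ls <output_dir>
bin  png  sln  src  txt
```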
### Generating a large benchmark

To generate multiple programs for building a large benchmark, you can use
`generate_benchmark.py` as follows:

```
$ cd scripts
$ python3 ./generate_benchmark.py <config_file> <output_dir>
```

Here, `<config_file>` is the path to a file that specifies the configuration of the
programs to be generated. The template for the configuration of a program is:

```
<algorithm>,<width>,<height>,<seed>,<num>,<cycle>,<generator>
```

Below is an example file for the two programs generated above with
`generate.sh`:

```
Wilsons,10,10,1,1,100percent,default_gen
Wilsons,10,10,1,1,100percent,CVE-2016-4487_gen
```

### Running a fuzzing experiment

To run fuzzers on the generated benchmark, each in a separate Docker container:
```
$ python3 ./scripts/run_tools.py <config_file> <output_dir>
```

An example of the configuration file (`<config_file>`) is provided below.
```
{
"MazeList" : "../examples/example1.list",
"Repeats" : 1,
"Duration" : 60,
"MazeDir" : "../examples/example1_benchmark",
"Tools" : ["afl", "afl++", "aflgo", "eclipser", "fuzzolic"]
}
```

- `MazeList`: path to the list of programs in the benchmark
- `Repeats`: number of repeats for each fuzzer
- `Duration`: length of fuzzing campaign in minutes
- `MazeDir`: path to a directory that contains benchmark programs
- `Tools`: one or more of the available fuzzers (`afl`, `afl++`, `aflgo`,
`eclipser`, `fuzzolic`)

Note that all paths (`MazeList` and `MazeDir`) should be either absolute paths
or paths relative to the `scripts` directory.

More examples of configuration files are provided under the `examples` directory.
Running each example should take about 1 hour. Please refer to
the [Examples](#examples) section for more details.

After the experiment is finished, the output directory (`<output_dir>`) will
contain the test cases generated by each run as well as code coverage measured with
gcov.

### Storing and summarizing results
Once the fuzzing campaign is finished, the coverage and bug finding results can
be summarized in CSV format using the following script:

```
$ ./scripts/save_results.sh <output_dir> <results_path> <param> <duration> <option>
```

- `<output_dir>`: directory that contains the fuzzing outputs
- `<results_path>`: path to save the output file

The script summarizes the fuzzing results based on the given parameter and
fuzzing duration. For example, by running:

```
$ ./scripts/save_results.sh <output_dir> <results_path> Algorithm 24 paper
```

each fuzzer's performance after 24 hours across different maze generation
algorithms will be displayed on the screen.

- `<param>`: one of the four parameters of Fuzzle (`Algorithm`, `Size`, `Cycle`, `Generator`)
- `<duration>`: length of the fuzzing campaign in hours
- `<option>`: `paper` for displaying tables similar to the ones shown in the
paper, `fuzzer` for displaying summarized results for each fuzzer
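Because the summary is plain CSV (the tutorial below, for instance, produces `results.csv` from the `<results_path>` argument `results`), you can pretty-print it in a terminal with standard tools. This is an illustrative command, not part of Fuzzle's scripts:
```
$ column -s, -t <results_path>.csv
```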
### Visualizing coverage results

You can also visualize the code coverage results in the form of a 2D maze, where
each cell represents a function in the program. The script for visualization
takes a maze array (.txt), coverage results (.gcov), and the maze size as input and
produces a PNG file:

```
$ python3 ./scripts/visualize.py <txt_file> <gcov_file> <output_name> <size>
```
- `<txt_file>`: path to the .txt file for the chosen maze. Note that such files are
automatically created under the `txt` directory when generating a benchmark.
- `<gcov_file>`: path to a .gcov file that contains coverage information. The
script for running fuzzers (`run_tools.py`) saves hourly coverage results in
the output directory.
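For instance, a command along the following lines renders the coverage of a 10x10 Wilsons maze (the file names here are hypothetical; the actual names depend on your benchmark and fuzzing configuration):
```
$ python3 ./scripts/visualize.py <output_dir>/txt/Wilsons_10x10_1_1.txt <fuzzing_output_dir>/coverage.gcov ./wilsons_cov 10
```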
### Examples

There are 3 sets of examples (`example#.list` and `example#.conf`), each of which
takes about 1 hour to run.

To generate the benchmark and test the fuzzers using these examples, run the
following commands from the `scripts` directory:

```
$ python3 ./generate_benchmark.py ../examples/example#.list ../examples/example#_benchmark
$ python3 ./run_tools.py ../examples/example#.conf ../examples/example#_outputs
$ ./save_results.sh ../examples/example#_outputs ../examples/example#_outputs/example#_results 0 paper > ../examples/example#_outputs/example#_summary
$ cat ../examples/example#_outputs/example#_summary
```

Note that you should replace `#` with the example number. Also, you should use the
matching pair to run each experiment. For example, if you used `example1.list` to
generate the benchmark, you should use `example1.conf` to run the fuzzers.

Because each example varies a different parameter, the `<param>` argument for the
`save_results.sh` script is specific to each example:

- For example 1: `Algorithm`
- For example 2: `Size`
- For example 3: `Generator`

The fuzzing results for each experiment will be saved in the `example#_outputs`
directory.

### Tutorial
In this tutorial, we will generate a benchmark consisting of 5 programs and run
AFL on the generated benchmark for 5 minutes per program. We will then visualize
the coverage results from one of the fuzzing campaigns using Fuzzle.

1. Generate the benchmark with:
```
$ cd scripts
$ python3 ./generate_benchmark.py ../tutorial/programs.list ../tutorial/benchmark
```
Note that `programs.list` contains the configurations of 5 programs to be
generated.

2. Run AFL:
```
$ python3 ./run_tools.py ../tutorial/run.conf ../tutorial/outputs
```
The `run.conf` file specifies the path to the generated benchmark, the duration
of the fuzzing run (5 minutes), and the fuzzer to run (AFL). This step will take
approximately 1 hour.

3. Store and visualize results:
```
$ ./save_results.sh ../tutorial/outputs ../tutorial/results Algorithm 0 paper > ../tutorial/summary
$ cat ../tutorial/summary
$ PATH_TO_TXT=../tutorial/benchmark/txt/Wilsons_20x20_1_1.txt
$ PATH_TO_COV=../tutorial/outputs/cov_gcov_Wilsons_20x20_1_1_25percent_default_gen_afl_0/Wilsons_20x20_1_1_25percent_default_gen_afl_0.c.gcov
$ python3 ./visualize.py $PATH_TO_TXT $PATH_TO_COV ../tutorial/wilsons 20
```
The fuzzing results will be stored in `results.csv` and the summary of results
will be printed to the screen. You can also check the generated visualization
(`wilsons.png`) in the `tutorial` directory.

## Artifact

We have made our artifacts publicly available to help reproduce the experiments in our paper. Please
check our [Fuzzle-Artifact](https://doi.org/10.5281/zenodo.7031393) repository.

## Citation
If you use Fuzzle in scientific work, consider citing our paper:
```bibtex
@INPROCEEDINGS{lee:ase:2022,
  author = {Haeun Lee and Soomin Kim and Sang Kil Cha},
  title = {{Fuzzle}: Making a Puzzle for Fuzzers},
  booktitle = {Proceedings of the International Conference on Automated Software Engineering},
  year = 2022
}
```