# OBSan

## Overview

OBSan is an implementation of the paper "OBsan: An Out-Of-Bound Sanitizer to
Harden DNN Executables", built on top of TVM.

This repo contains the source code for OBSan, together with the dataset
preparation tools, evaluation scripts, and downstream applications used in the
paper.
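
For context, the "DNN executables" OBSan hardens are standalone binaries
compiled from neural networks by TVM. A minimal sketch of producing one with
stock TVM (plain compilation only; OBSan's own instrumentation happens inside
this repo's pipeline) might look like:

```python
import tvm
from tvm import relay
from tvm.relay import testing

# Build a small stand-in network (relay.testing ships with TVM).
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)

# Compile it into a shared library -- a standalone "DNN executable" of the
# kind that OBSan instruments with out-of-bound checks.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
lib.export_library("resnet18.so")  # hypothetical output name
```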

## Source Code Organization

This project requires a working TVM installation. A recommended setup is to
clone the [TVM](https://github.com/apache/tvm) repo into a directory `tvm/` and
then clone this repo into a subdirectory `tvm/obsan/`. In this setup, the code
organization is as follows:

```
tvm/ # TVM root
|- obsan/ # OBSan root
   |- apps/ # Downstream applications
   |- eval/ # Evaluation tools
   |- support/ # Support libraries / dataset utilities
   |- backward.py
   |- cgutils.py
   |- ...
```

## Getting Started

### 1. Install TVM

First, clone the TVM repo and check out the commit OBSan was developed on:

```sh
git clone --recursive https://github.com/apache/tvm tvm
cd tvm
git checkout b6b0bafde
git submodule update
```

Then build the TVM shared libraries following the [official
instructions](https://tvm.apache.org/docs/install/from_source.html#build-the-shared-library).
The build output should be under `tvm/build/`.
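
Before moving on, it can help to confirm the build is importable from Python
(assuming `tvm/python` is on `PYTHONPATH`, as described in the official
instructions):

```python
# Sanity check for the TVM build from this step.
import tvm

print(tvm.__version__)              # dev version at commit b6b0bafde
print(tvm.runtime.enabled("llvm"))  # True if TVM was built with LLVM support
```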

### 2. Set Up OBSan

From the `tvm/` directory, clone this repo so that it lives at `tvm/obsan/`:

```sh
git clone https://github.com/yanzuochen/obsan
```

The project was developed with Python 3.8.12 on Ubuntu 18.04. For best
compatibility, the setup tools invoke
[pyenv](https://github.com/pyenv/pyenv) to install the same Python version and
to initialize a virtual environment with all the dependencies. Run the
following commands to automate this step:

```sh
cd obsan
./setup.sh
source obsan-env.sh
```
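
After sourcing the environment script, a quick sanity check is that the
interpreter and TVM are wired up as expected (assuming the script puts TVM's
Python package on the path):

```python
# Run inside the activated obsan environment.
import sys
import tvm

print(sys.version)   # the pyenv-installed interpreter should report 3.8.12
print(tvm.__file__)  # should point into the tvm/ checkout from Step 1
```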

### 3. Prepare Datasets

In this step, we need to download
[CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) as the training and
validation datasets and [ChestX-ray8](https://arxiv.org/abs/1705.02315) as the
undefined images dataset. We also run a few scripts to generate the AE and
perception-broken datasets for each of the three models (paths in the scripts
may need to be changed first):

```sh
python ./support/aegen/aegen.py --model
python ./support/broken.py
```
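
If the CIFAR10 download needs to be scripted by hand, a minimal sketch using
torchvision (assumed to be among the virtualenv's dependencies; the repo's own
loaders and paths may differ) is:

```python
# Hypothetical manual CIFAR10 fetch; point `root` at the dataset
# location expected by the scripts above.
from torchvision import datasets, transforms

root = "./data"  # placeholder path
train = datasets.CIFAR10(root, train=True, download=True,
                         transform=transforms.ToTensor())
val = datasets.CIFAR10(root, train=False, download=True,
                       transform=transforms.ToTensor())
print(len(train), len(val))  # 50000 10000
```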

### 4. Reproducing Evaluation Results

Evaluation scripts can be found in the directory `tvm/obsan/eval/` and can be
used to reproduce Tables III-VIII, as well as Tables XIV-XVII in the
appendices. Results will be saved to `tvm/obsan/results/`. The mapping from
each evaluation script to its tables is as follows:

```sh
python ./eval/evaluation_base.py # Tables III-V, XIV, XV
python ./eval/evaluation_sel.py # Tables VI, XVI
python ./eval/evaluation_bob.py # Tables VII, VIII, XVII
```

### 5. Reproducing Downstream Application Results

The two downstream applications, namely online AE generation prevention (Sec.
IX.A; Table IX) and feedback-driven fuzzing (Sec. IX.B; Table X), are available
at `tvm/obsan/apps/bae/` and `tvm/obsan/apps/fuzz.py`, respectively.

#### 5.1. Online AE Generation Prevention

To launch the online AE attack in the __default__ scenario, which allows a
perturbation budget of eps = 0.3 and 50 queries per seed, use the following
commands:

```sh
./apps/bae/attack.sh none 0.3 50 # Without OBSan
./apps/bae/attack.sh NBC 0.3 50 # With FOBSan
./apps/bae/attack.sh gn2 0.3 50 # With BOBSan
./apps/bae/attack.sh NBC+gn2 0.3 50 # With HOBSan
```

For the __sophisticated__ scenario with a perturbation budget of 0.035 and 500
queries per seed, use the following commands instead:

```sh
./apps/bae/attack.sh none 0.035 500 # Without OBSan
./apps/bae/attack.sh NBC 0.035 500 # With FOBSan
./apps/bae/attack.sh gn2 0.035 500 # With BOBSan
./apps/bae/attack.sh NBC+gn2 0.035 500 # With HOBSan
```
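
In both scenarios, the three positional arguments to `attack.sh` are the
OBSan variant to deploy (`none`, `NBC`, `gn2`, or `NBC+gn2`), the perturbation
budget eps, and the number of queries allowed per seed.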

#### 5.2. Feedback-Driven Fuzzing

To launch the fuzzing task and reproduce Table X, use the following commands:

```sh
./apps/fuzz.py --model --blind # Blackbox
./apps/fuzz.py --model # Greybox
```
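
As the comments indicate, passing `--blind` withholds OBSan's feedback so the
fuzzer mutates inputs blindly (blackbox), while omitting it enables the
feedback-driven greybox mode.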

Results will be saved to `tvm/obsan/results/fuzz/`.

### 6. Empirical Comparison with Other Works

This part of the results is already generated by `./eval/evaluation_base.py`
in Step 4.