Write some more documentation
TrisCC committed Jun 17, 2024
1 parent 861b11f commit d189764
Showing 4 changed files with 57 additions and 43 deletions.
68 changes: 41 additions & 27 deletions docs/how-to-guides.md → docs/examples.md
@@ -1,4 +1,43 @@
# How-To-Guides
# Examples

This section contains a couple of examples of how you can use Auto-Verify.

## Using Datasets

If you have your own models to use with Auto-Verify, you can use those. However, if you just want to try it out, we recommend using the datasets provided for the VNNCOMP competition. Auto-Verify supports reading benchmarks defined in the VNNCOMP format, which have the following structure:

```
vnncomp2022
└── test_props
├── instances.csv
├── onnx
│   ├── test_nano.onnx
│   ├── test_sat.onnx
│   └── test_unsat.onnx
└── vnnlib
├── test_nano.vnnlib
└── test_prop.vnnlib
```

Here, `instances.csv` is a CSV file with three columns: network, property and timeout. For example, the `test_props` directory contains the following three verification instances:

```
onnx/test_sat.onnx,vnnlib/test_prop.vnnlib,60
onnx/test_unsat.onnx,vnnlib/test_prop.vnnlib,60
onnx/test_nano.onnx,vnnlib/test_nano.vnnlib,60
```
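For illustration, here is a minimal sketch of how such an `instances.csv` could be read with the Python standard library (the benchmark path below is an assumption; Auto-Verify also provides APIs for working with these datasets directly):

```py
import csv
from pathlib import Path

# Adjust this path to wherever you placed the benchmark.
benchmark = Path("vnncomp2022/test_props")

# Each row of instances.csv is: network, property, timeout (in seconds).
with open(benchmark / "instances.csv", newline="") as f:
    for network, prop, timeout in csv.reader(f):
        print(benchmark / network, benchmark / prop, int(timeout))
```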

You can find the datasets from the previous years here:

- VNNCOMP 2023: https://github.com/stanleybak/vnncomp2023

- VNNCOMP 2022: https://github.com/stanleybak/vnncomp2022

- VNNCOMP 2021: https://github.com/stanleybak/vnncomp2021

- VNNCOMP 2020: https://github.com/verivital/vnn-comp

After downloading one of these datasets, you need to unzip the data files you want to use, or use the scripts provided in the repositories to prepare them.
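In these repositories the ONNX networks and VNNLIB properties are often distributed as gzipped files. Below is a minimal sketch of decompressing them in place with the Python standard library (this assumes `.gz` archives; adjust to however the dataset you downloaded is packaged):

```py
import gzip
import shutil
from pathlib import Path

# Adjust this path to the benchmark directory you downloaded.
benchmark = Path("vnncomp2022/test_props")

# Decompress every .gz file in place, e.g. test_sat.onnx.gz -> test_sat.onnx.
for gz_file in benchmark.rglob("*.gz"):
    with gzip.open(gz_file, "rb") as src, open(gz_file.with_suffix(""), "wb") as dst:
        shutil.copyfileobj(src, dst)
```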

## Verifying Properties

@@ -29,32 +68,7 @@ if __name__ == "__main__":

### Running VNNCOMP Benchmarks

Auto-Verify supports reading benchmarks defined in the VNNCOMP format, which have the following structure:

```
vnncomp2022
└── test_props
├── instances.csv
├── onnx
│   ├── test_nano.onnx
│   ├── test_sat.onnx
│   └── test_unsat.onnx
└── vnnlib
├── test_nano.vnnlib
└── test_prop.vnnlib
```

Here, `instances.csv` is a CSV file with three columns: network, property and timeout. For example, the `test_props` directory contains the following three verification instances:

```
onnx/test_sat.onnx,vnnlib/test_prop.vnnlib,60
onnx/test_unsat.onnx,vnnlib/test_prop.vnnlib,60
onnx/test_nano.onnx,vnnlib/test_nano.vnnlib,60
```

VNNCOMP benchmarks can be found at the following links: [2022](https://github.com/ChristopherBrix/vnncomp2022_benchmarks/tree/main/benchmarks), [2023](https://github.com/ChristopherBrix/vnncomp2023_benchmarks/tree/main/benchmarks). Make sure to unzip all files inside the benchmark after you have downloaded it.

Below is a code snippet that runs this benchmark. Note the `Path` pointing to the benchmark location.
Below is a code snippet that runs the VNNCOMP test_props benchmark. Note the `Path` pointing to the benchmark location.

```py
from pathlib import Path
```
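As a rough sketch of what such a script could look like: the helper names used below (`read_vnncomp_instances`, `Nnenum`, `verify_instance`) are illustrative stand-ins, not a verified interface; see the API reference for the actual functions and verifier classes.

```py
from pathlib import Path

# NOTE: hypothetical imports for illustration only; check the Auto-Verify
# API reference for the real module, function and class names.
from autoverify.util.instances import read_vnncomp_instances
from autoverify.verifier import Nnenum

benchmark_path = Path("vnncomp2022")

if __name__ == "__main__":
    # Read the (network, property, timeout) instances from instances.csv.
    instances = read_vnncomp_instances("test_props", vnncomp_path=benchmark_path)

    verifier = Nnenum()
    for instance in instances:
        # Verify a single instance and print the result.
        result = verifier.verify_instance(instance)
        print(result)
```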
16 changes: 5 additions & 11 deletions docs/getting-started.md
@@ -1,15 +1,12 @@
## Getting Started

### System requirements

Auto-Verify has only been tested on Linux and will not work on macOS or Windows. On Windows, using [Windows Subsystem for Linux](https://learn.microsoft.com/nl-nl/windows/wsl/about) is an option.

If you want to make use of the GPU-based verification algorithms, you will need a CUDA-enabled GPU.
Additionally, if you want to make use of the GPU-based verification algorithms, you will need a CUDA-enabled GPU.
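For example, one quick way to check whether a CUDA-capable GPU is visible from Python (assuming PyTorch happens to be installed in your environment) is:

```py
# Assumes PyTorch is installed; reports whether a CUDA-capable GPU is visible.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```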

### Installing Auto-Verify

First, install [Miniconda](https://docs.conda.io/projects/miniconda/en/latest/). Miniconda is used to manage the environments of the different verification tools; other environment managers will _not_ work.

!!! warning

    In some cases, Anaconda can fail to install environments where Miniconda does not.
@@ -56,10 +53,7 @@ To uninstall a verifier, run:

If you have your own models to use with Auto-Verify, you can use those. However, if you just want to try it out, we recommend using the datasets provided for the VNNCOMP competition. These datasets are already compatible with Auto-Verify, and APIs are available to work with them directly. You can find the datasets from the previous years here:

- VNNCOMP 2023: https://github.com/stanleybak/vnncomp2023

- VNNCOMP 2022: https://github.com/stanleybak/vnncomp2022

- VNNCOMP 2021: https://github.com/stanleybak/vnncomp2021

- VNNCOMP 2020: https://github.com/verivital/vnn-comp
- [VNNCOMP 2023](https://github.com/stanleybak/vnncomp2023)
- [VNNCOMP 2022](https://github.com/stanleybak/vnncomp2022)
- [VNNCOMP 2021](https://github.com/stanleybak/vnncomp2021)
- [VNNCOMP 2020](https://github.com/verivital/vnn-comp)
14 changes: 10 additions & 4 deletions docs/about.md → docs/index.md
@@ -2,6 +2,12 @@

Auto-Verify is a tool that provides interfaces, parameter spaces and installation managers for different neural network verification tools.

You can use the following guides to get started with Auto-Verify:

- [Getting Started](getting-started.md)
- [Examples](examples.md)
- [Contributing](contributing.md)

<!-- TODO: Write some information about the portfolio construction algorithms -->

## Included verification algorithms
@@ -10,10 +16,10 @@ For the creation of a portfolio of different verification algorithms, Auto-Verif

### CPU-based verification algorithms

- [NNENUM](https://github.com/stanleybak/nnenum)
- [Verinet](https://github.com/vas-group-imperial/VeriNet)
- [nnenum](https://github.com/stanleybak/nnenum) (_Stanley Bak_)
- [VeriNet](https://github.com/vas-group-imperial/VeriNet) (_VAS Group_)

### GPU-based verification algorithms

- [Alpha-Beta Crown](https://github.com/Verified-Intelligence/alpha-beta-CROWN)
- [OVALBAB](https://github.com/oval-group/oval-bab)
- [AB-Crown](https://github.com/Verified-Intelligence/alpha-beta-CROWN) (_Zhang et al_)
- [Oval-BaB](https://github.com/oval-group/oval-bab) (_OVAL Research Group_)
2 changes: 1 addition & 1 deletion mkdocs.yml
@@ -45,6 +45,6 @@ markdown_extensions:
nav:
- About: index.md
- Getting started: getting-started.md
- How-to guides: how-to-guides.md
- Examples: examples.md
- API: api.md
- Contributing: contributing.md
