We present an approach for introducing constraints into otherwise unconstrained adversarial machine learning evasion attacks. The technique is built on a constraint validation algorithm, Constraint Guaranteed Evasion (CGE), which guarantees that generated adversarial examples also satisfy domain constraints.
This repository includes a full CGE implementation and an experimental setup for running various adversarial evasion attacks, enhanced with CGE, against different data sets and victim classifiers.
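To make the idea concrete, here is a minimal sketch of constraint validation in Python. The constraint predicates and helper functions are illustrative placeholders, not the repository's actual API (see the `cge` directory for that).

```python
import numpy as np

# Hypothetical domain constraints for illustration: each is a predicate over a
# feature vector that any valid record must satisfy.
CONSTRAINTS = [
    lambda x: x[0] >= 0,        # e.g. a packet count cannot be negative
    lambda x: x[1] <= x[2],     # e.g. payload bytes cannot exceed total bytes
]

def satisfies_all(x: np.ndarray) -> bool:
    """True if example x satisfies every domain constraint."""
    return all(c(x) for c in CONSTRAINTS)

def filter_valid(adv_examples: np.ndarray) -> np.ndarray:
    """Keep only adversarial examples that remain valid in the problem domain.
    CGE goes further and guarantees validity during generation; this sketch
    shows only the validation predicate itself."""
    return np.array([x for x in adv_examples if satisfies_all(x)])

# An unconstrained attack may produce records no real system could emit;
# validation separates the usable adversarial examples from the rest.
adv = np.array([[5.0, 10.0, 20.0],    # valid
                [-1.0, 10.0, 20.0]])  # violates the first constraint
print(filter_valid(adv))              # -> [[ 5. 10. 20.]]
```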
Experiment options
- Attacks: Projected Gradient Descent (PGD), Zeroth-Order Optimization (ZOO), HopSkipJump attack.
- Classifiers: a Keras deep neural network and a tree-based XGBoost ensemble.
- Data sets: Four different data sets from various domains.
- Constraints: constraints are configurable experiment inputs; the config files show how to specify them (a brief sketch follows this list).
- Comparison attack: Constrained Projected Gradient Descent (C-PGD) by Simonetto et al.
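The actual specification format is defined by the files under `config`; purely for illustration, and assuming hypothetical feature names, a constraint set might capture relationships such as:

```python
# Hypothetical constraint definitions (illustrative only; see the files under
# config/ for the specification format the experiments actually use).
constraints = {
    # immutable features the attacker cannot modify at all
    "immutable": ["protocol", "service"],
    # relational constraints between features of a single record
    "relations": [
        "src_bytes >= 0",
        "dst_bytes >= 0",
        "duration == 0 or src_bytes / duration <= max_rate",
    ],
}
```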
Data sets
- IoT-23 - Malicious and benign IoT network traffic; 10,000 rows, 2 classes (sampled).
- UNSW-NB15 - Network intrusion dataset with 9 attacks; 10,000 rows, 2 classes (sampled).
- URL - Legitimate and phishing URLs; 11,430 rows, 2 classes (full data, not sampled).
- LCLD - Kaggle's All Lending Club loan data; 20,000 rows, 2 classes (sampled).
Notes on preprocessing and sampling
- The input data must be numeric, i.e., every value must parse to a numeric type.
- Categorical attributes must be one-hot encoded.
- Data should not be normalized (otherwise constraints must include manual scaling).
- All data sets have a 50/50 class label distribution.
- The provided sampled data sets were generated by random sampling without replacement (a preprocessing sketch follows this list).
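A minimal preprocessing sketch following the conventions above, using pandas and hypothetical column names (the repository already ships preprocessed data under `data/`):

```python
import pandas as pd

def preprocess(df: pd.DataFrame, label_col: str, n_per_class: int) -> pd.DataFrame:
    """Illustrative preprocessing: one-hot encode categoricals, leave values
    unscaled, and draw a balanced 50/50 sample without replacement."""
    # One-hot encode categorical attributes; numeric columns pass through.
    cat_cols = df.select_dtypes(include=["object", "category"]).columns
    cat_cols = cat_cols.drop(label_col, errors="ignore")
    df = pd.get_dummies(df, columns=list(cat_cols), dtype=int)
    # No normalization: constraints are written against the raw feature values.
    # Balanced sampling without replacement, n_per_class rows per class.
    sampled = (df.groupby(label_col, group_keys=False)
                 .apply(lambda g: g.sample(n=n_per_class, replace=False, random_state=0)))
    return sampled.reset_index(drop=True)
```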
A single experiment consists of:
1. configuration, data preprocessing, and setup,
2. training a classification model on the chosen data set,
3. applying an adversarial attack against the victim model,
4. scoring the generated adversarial examples, and
5. recording the result.
Steps 2-4 are repeated k times, once for each of the k folds of the input data. In step 3, constraint validation can be enabled, which determines whether the generated adversarial examples are guaranteed to satisfy the domain constraints. A code sketch of this loop follows the diagram below.
    ┌───────────────┐     ┌───────────────┐     ┌───────────────┐     ┌───────────────┐
○───┤  args-parser  ├─────┤     setup     ├─────┤      run      ├─────┤      end      ├───◎
    └───────────────┘     └───────────────┘     └───────────────┘     └───────────────┘
     inputs:               * preprocess data     k times:              write result
     * data set            * init classifier     1. train model
     * constraints         * init attack         2. attack
     * other configs       * init validation     3. score
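In code, the flow above corresponds roughly to the loop below; the `train_model`, `run_attack`, `score`, and `record` callables are placeholders standing in for the components in `exp`, not its actual interface.

```python
from sklearn.model_selection import KFold

def run_experiment(X, y, train_model, run_attack, score, record, k=5, validate=None):
    """Illustrative k-fold experiment loop: train, attack, score, record.
    `validate` stands in for CGE-style constraint validation; when given, the
    attack applies it so that generated examples stay inside the domain."""
    for fold, (train_idx, test_idx) in enumerate(KFold(n_splits=k).split(X)):
        model = train_model(X[train_idx], y[train_idx])           # 1. train model
        adv = run_attack(model, X[test_idx], y[test_idx],         # 2. attack
                         validate=validate)
        result = score(model, adv, X[test_idx], y[test_idx])      # 3. score
        record(fold, result)                                      # write result
```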
- The complete implementation of the CGE algorithm is in the `cge` directory.
- Examples of (static) constraint definitions are in `config`.
- Constraints are converted to executable form using a preprocessor.
- Examples of integrating CGE into evasion attacks: example 1, example 2, example 3. A simplified integration sketch follows below.
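For intuition only, the sketch below shows the kind of integration the linked examples perform: a validation hook applied after each perturbation step of a simplified PGD-style loop. The `validate` callable and the attack internals are placeholders, not the repository's code.

```python
import numpy as np

def pgd_with_validation(grad_fn, x0, eps, alpha, steps, validate):
    """Simplified PGD-style loop with a validation hook: after every update,
    `validate` maps the candidate back to an example that satisfies the
    domain constraints (or leaves it unchanged if it already does)."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))   # gradient step on the loss
        x = np.clip(x, x0 - eps, x0 + eps)    # stay inside the L-infinity ball
        x = validate(x)                       # enforce domain constraints
    return x
```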
.
├─ .github/           Automated workflows, development instructions
├─ cge/               CGE algorithm implementation
├─ comparison/        C-PGD attack implementation and its license
├─ config/            Experiment configuration files
├─ data/
│  ├─ feature_*.csv   C-PGD feature files
│  └─ *               Preprocessed input data
├─ exp/               Source code for running experiments
├─ plot/              Plotting of experiment results
├─ ref_result/        Referential result for comparison
├─ test/              Unit tests (for development)
├─ LICENSE
├─ Makefile           Pre-configured commands to ease running experiments
└─ requirements.txt   Software dependencies
Software requirements
Check your environment using the following commands, and install or upgrade as necessary.
python3 --version && make --version
Install dependencies
pip install -r requirements.txt --user
⏱️ 24–48 h
Run attack evaluations.
Run experiments for all supported combinations of data sets, classifiers, and attacks.
make attacks  -- run all attacks, using constraint enforcement.
make original -- run all attacks, but without validation (ignore constraints).
⏱️ 30 min–3 h
Run the constraint performance test.
This test uses an increasing number of constraints, of increasing complexity, to measure the performance impact of introducing constraints into an attack (a timing sketch follows the command below).
It runs the PGD, C-PGD, and VPGD attacks on a neural network classifier trained on the UNSW-NB15 data set.
make perf
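Conceptually, the performance test times the same attack under constraint sets of growing size and complexity, roughly as sketched below; `run_attack` and `constraint_sets` are hypothetical names, not the experiment's real interface.

```python
import time

def measure_overhead(run_attack, constraint_sets):
    """Time one attack run per constraint set of increasing size/complexity,
    so validation cost can be compared against the unconstrained baseline."""
    timings = {}
    for name, constraints in constraint_sets.items():
        start = time.perf_counter()
        run_attack(constraints=constraints)
        timings[name] = time.perf_counter() - start
    return timings
```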
Plots. Generate plots of experiment results.
make plots
Comparison plot. To plot results from some other directory, e.g. `ref_result`, append a directory name.
make plots DIR=ref_result
The default experiment options are defined statically.
An experiment run can be customized further with command-line arguments, which override the static options.
To run custom experiments, call the `exp` module directly:
python3 -m exp [PATH] {ARGS}
For a list of supported arguments, run:
python3 -m exp --help
All plotting utilities live separately from the experiments, in the `plot` module.
For plotting help, run:
python3 -m plot --help