This repository has been archived by the owner on Oct 3, 2024. It is now read-only.

Commit 35f7612: refactor

nkrusch committed Jan 6, 2024
1 parent b94f408
Showing 7 changed files with 30 additions and 23 deletions.
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,6 +1,6 @@
MIT License

-Copyright (c) 2023 Augusta University, Cyber Attack Detection
+Copyright (c) 2023 AU Cyber Attack Detection

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
37 changes: 22 additions & 15 deletions README.md
@@ -5,7 +5,12 @@
This implementation demonstrates an approach to introduce constraints to unconstrained adversarial machine learning evasion attacks.
We develop a constraint validation algorithm, _Constraint Guaranteed Evasion_ (CGE), that guarantees generated evasive adversarial examples satisfy domain constraints.

-The experimental setup allows running various adversarial evasion attacks, enhanced with CGE, on different data sets and victim classifiers.
+The complete implementation of the CGE algorithm is in the [`/cge`](https://github.com/aucad/cge/tree/main/cge) directory.
+Examples of how to define constraints can be found, e.g., [here](https://github.com/aucad/cge/blob/main/config/iot23.yaml).
+The constraints are converted to executable form using this [preprocessor](https://github.com/aucad/cge/blob/main/exp/preproc.py#L14-L27).
+Examples showing how to integrate CGE into existing adversarial evasion attacks are [here](https://github.com/aucad/cge/blob/main/exp/hopskip.py#L26-L28), [here](https://github.com/aucad/cge/blob/main/exp/pgd.py#L44), and [here](https://github.com/aucad/cge/blob/main/exp/zoo.py#L44).
+
+This repository also includes an experimental setup for running various adversarial evasion attacks, enhanced with CGE, on different data sets and victim classifiers.
The following options are included.

- **Attacks**: Projected Gradient Descent (PGD), Zeroth-Order Optimization (ZOO), HopSkipJump attack.
@@ -15,6 +20,7 @@ The following options are included.

**Comparison.** We also include a comparison attack, _Constrained Projected Gradient Descent_ (C-PGD).
It uses a different constraint evaluation approach introduced by [Simonetto et al.](https://arxiv.org/abs/2112.01156).
+The C-PGD implementation is from [here](https://github.com/serval-uni-lu/constrained-attacks) and has its own, separate software license.

**Data sets**

@@ -36,25 +42,26 @@

### Repository organization

-| Directory | Description |
-|:-------------|:--------------------------------------------------|
-| `.github` | Automated workflows, development instructions |
-| `cge` | CGE algorithm implementation |
-| `comparison` | C-PGD attack implementation and its license |
-| `config` | Experiment configuration files |
-| `data` | Preprocessed input data sets |
-| `exp` | Source code for running experiments |
-| `plot` | Utilities for plotting experiment results |
-| `ref_result` | Referential result for comparison |
-| `test` | Unit tests |
+| Directory | Description |
+|:-------------|:----------------------------------------------|
+| `.github` | Automated workflows, development instructions |
+| `cge` | CGE algorithm implementation |
+| `comparison` | C-PGD attack implementation and its license |
+| `config` | Experiment configuration files |
+| `data` | Preprocessed input data sets |
+| `exp` | Source code for running experiments |
+| `plot` | Plotting of experiment results |
+| `ref_result` | Referential result for comparison |
+| `test` | Unit tests |

- The Makefile contains pre-configured commands to simplify running experiments.
- The `data/feature_*.csv` files are exclusively for use with the C-PGD attack.
- All software dependencies are listed in `requirements.txt`.

## Experiment workflow

-A single experiment consists of (1) preprocessing and setup (2) training a classification model on a choice data set (3) applying an adversarial attack to that model (4) scoring and (5) recording the result. A constraint-validation approach can be enabled or disabled during the attack to impact the validity of the generated adversarial examples.
+A single experiment consists of (1) preprocessing and setup, (2) training a classification model on a chosen data set, (3) applying an adversarial attack to that model, (4) scoring, and (5) recording the result.
+A constraint-validation approach can be enabled or disabled during the attack to impact the validity of the generated adversarial examples.

<pre>
┌───────────────┐ ┌───────────────┐ ┌───────────────┐ ┌───────────────┐
@@ -66,7 +73,7 @@
* other configs * init validation 3. score
</pre>

-## Usage
+## Reproducing paper experiments

**Software requirements**

@@ -117,7 +124,7 @@ make plots
make plots DIR=ref_result
```

-## Extended/Custom usage
+## Extended usage

The default experiment options are defined statically in `config` files.
An experiment run can be customized further with command-line arguments that override the static options.
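The hunks below all funnel through one call, `enforce(x_original, x_adv)`, on the validation model. As a rough, hypothetical sketch of that interface (the predicate format, the `Validation` constructor, and the repair-by-reset strategy here are assumptions for illustration; the actual CGE algorithm lives in `/cge`, with constraints declared in `config/*.yaml` and compiled by `exp/preproc.py`):

```python
# Hypothetical sketch only: not the actual CGE implementation.
import numpy as np

class Validation:
    def __init__(self, constraints):
        # constraints: list of (feature_indices, predicate) pairs; a predicate
        # takes a batch of rows and returns one boolean per row (assumed format)
        self.constraints = constraints

    def enforce(self, x_orig: np.ndarray, x_adv: np.ndarray) -> np.ndarray:
        """Repair adversarial rows that violate a constraint by restoring
        the original values of the affected features (one naive strategy)."""
        x_fixed = x_adv.copy()
        for idx, pred in self.constraints:
            bad_rows = np.nonzero(~pred(x_fixed))[0]  # violating row indices
            x_fixed[np.ix_(bad_rows, idx)] = x_orig[np.ix_(bad_rows, idx)]
        return x_fixed

# example constraint: feature 0 is a protocol flag that must stay binary
v = Validation([([0], lambda x: np.isin(x[:, 0], (0.0, 1.0)))])
```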
2 changes: 1 addition & 1 deletion cge/types.py
@@ -16,7 +16,7 @@

class Validatable:
    """Base class for an attack with constraints"""
-    v_model = None
+    cge = None

    def vhost(self):
        """validation model owner"""
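After this rename, attacks reach their validation model through `cge` rather than `v_model`. A minimal sketch of the pattern, assuming `vhost()` simply returns the attack itself (its body is truncated in the diff above) and using a toy perturbation in place of a real attack:

```python
import numpy as np

class Validatable:
    """Base class for an attack with constraints (as in cge/types.py)."""
    cge = None  # validation model, attached by the experiment runner

    def vhost(self):
        """validation model owner (body assumed; truncated in the diff)"""
        return self

class ToyAttack(Validatable):
    def generate(self, x: np.ndarray) -> np.ndarray:
        x_adv = x + 0.1  # stand-in for a real perturbation step
        # project the perturbed batch back onto the constrained domain
        return self.cge.enforce(x, x_adv)

# wiring, as in exp/attack.py below: attack.vhost().cge = cge
```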
6 changes: 3 additions & 3 deletions exp/attack.py
@@ -64,7 +64,7 @@ def reset(self, cls):
    def can_validate(self):
        return issubclass(self.attack, Validatable)

-    def run(self, v_model: Validation):
+    def run(self, cge: Validation):
        """Generate adversarial examples and score."""
        self.start = time.time_ns()
        if issubclass(self.attack, CPGD):
@@ -73,7 +73,7 @@ def run(self, v_model: Validation):
        else:
            aml_attack = self.attack(self.cls.classifier, **self.conf)
        if self.can_validate:
-            aml_attack.vhost().v_model = v_model
+            aml_attack.vhost().cge = cge
        self.adv_x = aml_attack.generate(x=self.ori_x)
        self.adv_y = np.array(self.cls.predict(
            self.adv_x, self.ori_y).flatten())
@@ -82,7 +82,7 @@ def run(self, v_model: Validation):
        sys.stdout.write('\x1b[2K')

        self.score.calculate(
-            self, v_model.constraints, v_model.scalars,
+            self, cge.constraints, cge.scalars,
            dur=self.end - self.start)
        return self

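Aside from the rename, note the `'\x1b[2K'` written near the end of `run()`: it is the ANSI erase-line escape sequence, used to clear the attack's progress output. A self-contained demo of the same trick (not repository code):

```python
import sys
import time

# '\r' returns to the start of the line; '\x1b[2K' erases it, so a progress
# message can overwrite itself instead of scrolling.
for step in range(1, 4):
    sys.stdout.write(f'\rattack batch {step}/3')
    sys.stdout.flush()
    time.sleep(0.2)
sys.stdout.write('\x1b[2K\r')  # leave a clean line behind when done
```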
2 changes: 1 addition & 1 deletion exp/hopskip.py
@@ -23,6 +23,6 @@ def _attack(
            target, mask, clip_min, clip_max)

        # adjust shape: 1d -> 2d -> 1d
-        return self.v_model.enforce(
+        return self.cge.enforce(
            np.array([original_sample]),
            np.array([x_adv]))[0]  # NEW
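The `1d -> 2d -> 1d` comment reflects that `enforce` operates on batches: HopSkipJump perturbs one sample at a time, so the sample is wrapped into a one-row batch and the single repaired row is unwrapped afterwards. Illustrated with arbitrary values:

```python
import numpy as np

sample = np.array([1.0, 2.0, 3.0])  # one sample, shape (3,)
batch = np.array([sample])          # wrap: shape (1, 3), a one-row batch
repaired = batch                    # stand-in for cge.enforce(orig, batch)
flat = repaired[0]                  # unwrap: back to shape (3,)
assert flat.shape == sample.shape
```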
2 changes: 1 addition & 1 deletion exp/pgd.py
@@ -41,7 +41,7 @@ def _compute(
            x, x_init, y, mask, eps, eps_step, project,
            random_init, batch_id_ext, decay, momentum)

-        return self.v_model.enforce(x, x_adv)  # NEW
+        return self.cge.enforce(x, x_adv)  # NEW


class VPGD(ProjectedGradientDescent, Validatable):
2 changes: 1 addition & 1 deletion exp/zoo.py
@@ -41,7 +41,7 @@ def _generate_batch(self, x_batch: np.ndarray, y_batch: np.ndarray) -> np.ndarray:
        # Run with 1 specific binary search step
        best_dist, best_label, best_attack = self._generate_bss(x_batch, y_batch, c_current)
-        best_attack = self.v_model.enforce(x_batch, best_attack)  # NEW!
+        best_attack = self.cge.enforce(x_batch, best_attack)  # NEW!

        # Update best results so far
        o_best_attack[best_dist < o_best_dist] = best_attack[best_dist < o_best_dist]
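The masked assignment at the end of this hunk keeps, per sample, whichever adversarial example has achieved the smaller distance so far. The same numpy pattern in isolation, with arbitrary values:

```python
import numpy as np

o_best_dist = np.array([0.9, 0.2, 0.5])  # best distance per sample so far
best_dist = np.array([0.4, 0.3, 0.1])    # distances from the current step
o_best_attack = np.zeros((3, 2))         # best adversarial examples so far
best_attack = np.ones((3, 2))            # current-step adversarial examples

mask = best_dist < o_best_dist           # True where the new result is better
o_best_attack[mask] = best_attack[mask]  # update only the improved samples
o_best_dist[mask] = best_dist[mask]
# samples 0 and 2 are replaced; sample 1 keeps its earlier example
```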
