
Added Carlini Linfinity attack #70

Open · wants to merge 5 commits into master

Conversation

samuelemarro (Contributor)

This implementation is based on Carlini's original TensorFlow implementation.
The main differences between the original and this one are:

  • Carlini's implementation works on only one image at a time, while this one works on the whole batch.
  • Carlini's bounds on tau (initial_tau and min_tau) are hard-coded, while in this implementation they are passed as parameters.
  • This implementation supports values outside the 0-1 range via clip_min and clip_max (like AdverTorch's Carlini L2 implementation).
  • This implementation makes warm start optional, while in Carlini's code warm start is always enabled.

All the default parameter values are based on Carlini’s implementation.
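As a hedged, pure-Python sketch of the tau-bound scheme the parameters above describe (the names initial_tau, min_tau, and the decay factor tau_factor are illustrative here, not necessarily the PR's actual names or defaults):

```python
# Hypothetical sketch of the tau schedule: the Linfinity bound starts at
# initial_tau and shrinks geometrically until it falls below min_tau.
# All names and default values here are assumptions for illustration.
def tau_schedule(initial_tau=1.0, min_tau=1.0 / 256, tau_factor=0.9):
    """Yield successively smaller Linfinity bounds until min_tau is reached."""
    tau = initial_tau
    while tau >= min_tau:
        yield tau
        tau *= tau_factor

taus = list(tau_schedule())
```

Making initial_tau and min_tau parameters (rather than hard-coding them as in the original) is what lets a caller trade attack strength against runtime.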

Unlike the Carlini L2 attack, this attack is quite sensitive to the choice of max_const: too low and the attack will fail; too high and the Linfinity distance will be far larger than necessary. See here for a comparison.
The cause is clear: Carlini's original implementation returns the last adversarial found, which is not necessarily the best one. The last adversarial is the one found with the highest value of const.

I could modify the code so that it returns the best adversarial, rather than the last, but this would mean that this implementation would be different from Carlini's code. I'll leave the decision up to the maintainers.
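For concreteness, "returning the best" would mean tracking, per sample, whichever candidate so far has the smallest Linfinity distance to the original input. A minimal sketch (not the PR's actual code; tensor shapes and names are assumptions):

```python
import torch

def update_best(best_adv, best_dist, candidate, x):
    """Per sample, keep whichever adversarial is closer to x in Linfinity.

    best_adv, candidate, x: (batch, features) tensors.
    best_dist: (batch,) tensor of current best Linfinity distances.
    """
    # Linfinity distance of each candidate to its original input.
    dist = (candidate - x).flatten(1).abs().max(dim=1).values
    improved = dist < best_dist
    best_adv = best_adv.clone()
    best_adv[improved] = candidate[improved]
    best_dist = torch.where(improved, dist, best_dist)
    return best_adv, best_dist
```

Called once per const step, this makes the final output the best adversarial over all values of const rather than the one found at the highest const, which would remove the sensitivity to max_const described above.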

gwding (Collaborator) commented Mar 10, 2020

@samuelemarro Thanks for the contribution!
I see this is still a work in progress, so I guess I'll wait a bit before reviewing it?

Maybe it's already in your plan, but I just want to mention that it would also be nice if you could follow the guidelines for attacks specified in https://github.com/BorealisAI/advertorch/blob/master/CONTRIBUTING.md to add the tests.

samuelemarro (Contributor, Author)

I marked it as a work in progress because I need to know whether you (and the other core maintainers) prefer to follow the original implementation (in this case, Carlini's implementation, which is very sensitive to max_const) or use an "improved" implementation (where increasing max_const does not lead to a drop in performance). Other than that, it's mostly ready.

I followed the contribution guidelines, did I miss anything? Sorry, it's my first major pull request to this repo.

gwding (Collaborator) commented Mar 11, 2020

No worries :)
There are two more things to do; both are pretty simple.

  1. The current code does not seem to pass the flake8 check; see https://travis-ci.org/github/BorealisAI/advertorch/pull_requests for the travis-ci test results. Somehow the travis-ci status is not shown automatically in the PR.
  2. Benchmark the attack with at least one performance measure by adding a script to advertorch_examples/attack_benchmarks. There are some examples in the folder already.

Re: max_const, I'd suggest returning the best adversarial by default and adding a flag that lets users choose between the best and the last. Does that make sense?
The performance of the two options can be compared in the benchmark script.

Once these changes are in, I'll review and make the necessary changes to the copyright notices and the authors' list.

samuelemarro (Contributor, Author)

OK, I've added return_best and a benchmark that compares the attack with and without return_best. Unfortunately, my computer is not powerful enough to run a proper benchmark, so I'll have to push without the benchmark results.

samuelemarro marked this pull request as ready for review March 17, 2020 21:43
# during optimisation
prev_adversarials = x.clone()

while torch.any(active):
Collaborator:
This line doesn't work for torch 1.1 (works for torch 1.4). As in torch 1.1, torch.any only takes byte tensor, not bool tensor.

Collaborator:
No need to fix it right now, but please mention this in a comment. We'll implement testing under multiple torch versions later, and it can be fixed together with the other problems.
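A hedged sketch of one compatibility workaround for the issue above: torch.any in torch 1.1 only accepts byte (uint8) tensors, so casting the bool mask keeps the call working on both 1.1 and newer versions.

```python
import torch

# Casting the bool mask to uint8 before torch.any keeps the call compatible
# with torch 1.1 (byte tensors only) as well as torch 1.4+ (bool tensors).
# This is an illustrative workaround, not the PR's actual code.
active = torch.tensor([True, False, True])
still_active = bool(torch.any(active.to(torch.uint8)))
```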


replace_to = active.clone()
replace_to[active] = filter_
to[replace_to] = from_[filter_]
Collaborator:
I'm a little confused here. What would happen if to[replace_to] and from_[filter_] have different numbers of elements?
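For reference, this illustrates the shape constraint behind that question: boolean-mask assignment requires the number of True entries in the mask to match the number of rows on the right-hand side, otherwise PyTorch raises a RuntimeError rather than silently broadcasting. The tensors here are illustrative, not the PR's.

```python
import torch

to = torch.zeros(4, 2)
from_ = torch.ones(3, 2)
replace_to = torch.tensor([True, False, True, True])  # exactly 3 True entries

# OK: 3 selected rows on the left, 3 source rows on the right.
to[replace_to] = from_

# A mismatched count (e.g. a mask with 2 True entries against 3 source
# rows) would raise a RuntimeError at assignment time.
selected_sum = to[replace_to].sum().item()
```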

@@ -0,0 +1,64 @@
# Copyright (c) 2018-present, Royal Bank of Canada and other authors.
Collaborator:
I'm running this on a single-GPU machine and it seems to be pretty slow. Any estimate of how long this script would run for?

gwding (Collaborator) commented Aug 5, 2020

@samuelemarro Sorry for the super long delay in responding. I just found that I didn't "submit" my review, so the comments were "pending" and you could not see them... for 4 months.

# during optimisation
prev_adversarials = x.clone()

while torch.any(active):
Collaborator:
Is this while loop needed here? There already exists a loop in _run_attack. This loop may not stop, and it seems to cause an infinite loop in the benchmarking test.
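One common way to guarantee termination of such a loop is to cap the iteration count. A hedged sketch, with illustrative names and defaults rather than the PR's actual variables:

```python
# Illustrative only: bound the tau-reduction loop so it cannot run forever,
# even if the stopping condition is never met.
def bounded_tau_loop(initial_tau=1.0, min_tau=1.0 / 256, tau_factor=0.9,
                     max_iterations=1000):
    tau, iterations = initial_tau, 0
    while tau >= min_tau and iterations < max_iterations:
        # ... attack step for the current tau would go here ...
        tau *= tau_factor
        iterations += 1
    return iterations
```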
