
Getting unstable neurons with alpha-beta CROWN #87

Open

Dalnaracrater opened this issue Jan 8, 2025 · 2 comments

@Dalnaracrater

Hello,
Thank you for your dedication to the research community.

I have a question about obtaining unstable neurons in the context of verification research. Specifically, I have noticed that some works mention that the authors identify unstable neurons using $\alpha\beta$-CROWN. However, I am unsure at which step these neurons are typically obtained.

Please correct me if I am wrong: for instance, are they obtained during incomplete verification using $\alpha$-CROWN through LiRPA, or during complete verification with $\alpha\beta$-CROWN (and if so, before or after the bound optimization process)?

Is there a commonly accepted context or step in the research field where unstable neurons are identified?

Thank you again for your insights.

@shizhouxing
Member

In general, unstable neurons are identified by a bound computation method: a ReLU neuron is unstable when its pre-activation lower bound is negative and its upper bound is positive, so its activation pattern is not fixed over the input region. In alpha-beta-CROWN, the bounds typically come from alpha-CROWN by default (CROWN/auto_LiRPA with bound optimization), and in some cases they may be further refined by MILP. By default, the complete verification process with branch-and-bound does not change which neurons are unstable, except for the neurons that are branched on.
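
For illustration, here is a rough sketch of counting unstable ReLU neurons from the pre-activation bounds computed by auto_LiRPA. The `compute_bounds` call follows the documented API; the way the sketch walks the bound graph and reads cached `.lower`/`.upper` attributes off each ReLU's input node is only an approximation of the internals and may need adjustment for your auto_LiRPA version:

```python
import torch
import torch.nn as nn
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

# A small example network; replace it with the model you want to analyze.
model = nn.Sequential(nn.Linear(784, 100), nn.ReLU(), nn.Linear(100, 10))
x = torch.randn(1, 784)

# Wrap the model and define an L-infinity perturbation around x.
lirpa_model = BoundedModule(model, torch.empty_like(x))
ptb = PerturbationLpNorm(norm=float("inf"), eps=0.02)
x_bounded = BoundedTensor(x, ptb)

# "CROWN-Optimized" is the auto_LiRPA name for alpha-CROWN
# (backward CROWN bounds with optimized relaxation parameters).
lb, ub = lirpa_model.compute_bounds(x=(x_bounded,), method="CROWN-Optimized")

# A ReLU neuron is unstable if its pre-activation bounds straddle zero.
# Assumption: after compute_bounds, intermediate bounds are cached on the
# bound-graph nodes as .lower / .upper; exact attributes may differ by version.
unstable = 0
for node in lirpa_model.modules():
    if type(node).__name__ == "BoundRelu":
        pre = node.inputs[0]  # node holding the pre-activation bounds
        if getattr(pre, "lower", None) is not None and getattr(pre, "upper", None) is not None:
            unstable += ((pre.lower < 0) & (pre.upper > 0)).sum().item()

print(f"Unstable ReLU neurons: {unstable}")
```

If you only need a quick estimate, cheaper methods such as `method="CROWN"` or `method="IBP"` run much faster, but their looser bounds will generally mark more neurons as unstable.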

Do you have a particular use case requiring unstable neurons? And what previous works ("authors identify unstable neurons using αβ-CROWN") were you referring to?

@Dalnaracrater
Author

Thank you for your clarification!

I’ve read the paper “Linearity Grafting,” which mentions selecting unstable neurons and replacing them with a linear function. I’m curious how the authors identify these unstable neurons. Based on your explanation, it seems they rely on alpha-CROWN (potentially refined using MILP); please correct me if I am wrong.

I also have a couple of follow-up questions:
Q1. I’d like to confirm whether the implementation of alpha-CROWN in this GitHub repository is the same as the alpha-CROWN implemented in auto_LiRPA.
Q2. Identifying unstable neurons for, say, 10,000 images seems quite time-consuming. Are there any methods to speed up this process? I tried to use this function
