---
layout: page
title: FAQ
background: '/img/bg-trojan.png'
---
<ol>
<li><b>Where do we download data and submit results?</b> See the <a href="start.html">Getting Started</a> page.</li>
<li><b>How many submissions can each team enter per competition track?</b> In each track, teams are limited to 5 submissions per day on the validation set and 5 submissions total on the test set. Only one account per team can be used to submit results. Creating multiple accounts to circumvent the submission limits will result in disqualification.</li>
<li><b>Are participants required to share the details of their method?</b> We encourage all participants to share their methods and code, either with the organizers or publicly. To be eligible for prizes, winning teams are required to share their methods, code, and models with the organizers.</li>
<li><b>What are the current rules?</b> <a href="index.html#rules" style="text-decoration: underline;">Here</a>.</li>
<li><b>Can the organizers change the rules?</b> Yes. We require participants to consent to rule changes in the event of urgent need: this is a new area, and unanticipated developments may make it necessary for us to change the rules.</li>
<li><b>Can the first-place team in the Evasive Trojans Track also participate in the final round?</b> To avoid a possible unfair advantage, the first-place team of the Evasive Trojans Track, whose models are used in the final round, will not be eligible for prizes in the final round. However, they may still participate and appear on the leaderboard.</li>
<li><b>Is there a restriction on the number of clean examples that can be used by detection methods?</b> No. Submissions can use the entirety of the original data sources (e.g., MNIST, CIFAR-10, etc.) for their detection methods. We do not view restrictions on the number of clean examples as particularly relevant or realistic; however, this is a new area for competitions, so if such restrictions turn out to matter, we will find out.</li>
<li><b>Can I combine the datasets for the different tracks, e.g., to train a multitask method?</b> Yes.</li>
<li><b>What are the details for the Trojan Detection Track?</b> <a href="tracks.html#trojan-detection" style="text-decoration: underline;">Here</a>.</li>
<li><b>What are the details for the Trojan Analysis Track?</b> <a href="tracks.html#trojan-analysis" style="text-decoration: underline;">Here</a>.</li>
<li><b>What are the details for the Evasive Trojans Track?</b> <a href="tracks.html#evasive-trojans" style="text-decoration: underline;">Here</a>.</li>
<li><b>How do I contact the organizers?</b> Please feel free to contact us at <a href="mailto:[email protected]" style="text-decoration: underline;">[email protected]</a>.</li>
<li><b>Why are you using the baselines you have chosen?</b> Our baseline detectors (MNTD, Neural Cleanse, ABS) are well-known Trojan detectors from the academic literature, each with a distinct approach to Trojan detection. We also use a specificity-based detector as a baseline, since we find that Trojan attacks with low specificity can be highly susceptible to such a detector (a minimal sketch of this idea appears below the list).</li>
<li><b>Why are you using the attack types you have chosen?</b> We use patch and whole-image attacks based on the well-known BadNets [<a href="#citation1" style="text-decoration: none;color: green">1</a>] and blended [<a href="#citation2" style="text-decoration: none;color: green">2</a>] attack strategies. To form the basis of our challenge, we modify these attacks to be harder to detect while still maintaining high attack success rates (the two trigger styles are sketched below the list).</li>
<li><b>Why are you using the architectures you have chosen?</b> We use shallow ConvNets, Wide Residual Networks, and SimpleViT networks to cover a range of neural architectures.</li>
<li><b>What is the competition workshop?</b> Each NeurIPS 2022 competition has several hours allotted for a workshop specific to the competition. We will use this time to announce the winning teams for each track and describe the winning methods, takeaways, etc. For information on the upcoming competition workshop, see <a href="workshop.html" style="text-decoration: underline;">here</a>.</li>
<li><b>What is the publication summarizing the results?</b> After the conclusion of the competition, the winning teams will be invited to co-author a publication describing the competition, the winning methods, takeaways, etc. This publication will be in the NeurIPS 2022 proceedings.</li>
<li><b>When will prizes be distributed?</b> The winning teams will be announced in November 2022, after the organizers verify that the shared code and models of top submissions are legitimate. Prize money will be distributed as soon as possible after the winning teams are announced.</li>
</ol>
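<p>The specificity idea can be made concrete with a short sketch. This is a minimal illustration, assuming a PyTorch classifier taking NCHW image batches; the function name, patch size, and trial count are illustrative assumptions, not the competition's actual baseline implementation:</p>
<pre><code>import torch

def specificity_score(model, clean_images, num_trials=100, patch_size=5):
    """Hedged sketch: stamp random patches onto clean inputs and measure
    how often predictions collapse to a single class. A low-specificity
    Trojan fires on many random patterns, pushing this rate up.
    All names and defaults here are illustrative assumptions."""
    model.eval()
    collapse_rates = []
    for _ in range(num_trials):
        # Random trigger candidate with the same channel count as the data.
        patch = torch.rand(clean_images.size(1), patch_size, patch_size)
        stamped = clean_images.clone()
        stamped[:, :, :patch_size, :patch_size] = patch
        with torch.no_grad():
            preds = model(stamped).argmax(dim=1)
        # Fraction of inputs mapped to the most common predicted class.
        collapse_rates.append(preds.bincount().max().item() / len(preds))
    return sum(collapse_rates) / num_trials
</code></pre>
<p>A model that maps many random patterns to one class (a high score) is suspicious; a clean model's predictions should be largely unaffected by small random patches.</p>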
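<p>For intuition about the two attack families, here is a minimal NumPy sketch, assuming images are float arrays in HxWxC layout; the patch location and blend ratio are illustrative placeholders, not the triggers used in the competition:</p>
<pre><code>import numpy as np

def apply_patch_trigger(image, patch, x=0, y=0):
    """BadNets-style attack: overwrite a small fixed region with a patch."""
    triggered = image.copy()
    h, w = patch.shape[:2]
    triggered[y:y + h, x:x + w] = patch
    return triggered

def apply_blended_trigger(image, trigger, alpha=0.1):
    """Blended attack: mix a faint whole-image trigger into every pixel."""
    return (1 - alpha) * image + alpha * trigger
</code></pre>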
<hr>
<p id="citation1" style="margin : 0; padding-top:0;">1: "BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain". Gu et al.</p>
<p id="citation2" style="margin : 0; padding-top:0;">2: "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning". Chen et al.</p>