---
layout: page
title: The Trojan Detection Challenge
background: '/img/bg-trojan.png'
---
<p>This is the official website of the Trojan Detection Challenge, a NeurIPS 2022 competition. In this competition, we challenge you to detect and analyze Trojan attacks on deep neural networks that are <b>designed to be difficult to detect</b>. Neural Trojans are a growing concern for the security of ML systems, but little is known about the fundamental offense-defense balance of Trojan detection. Early work suggests that standard Trojan attacks may be easy to detect [<a href="#citation1" style="text-decoration: none;color: green">1</a>], but recently it has been shown that in simple cases one can design practically undetectable Trojans [<a href="#citation2" style="text-decoration: none;color: green">2</a>]. We invite you to help answer an important research question for deep neural networks: How hard is it to detect hidden functionality that is trying to stay hidden?</p>
<p><b>Prizes:</b> There is a <u>$50,000 prize pool</u>. The first-place teams will also be invited to co-author a publication summarizing the competition results and to give a short talk at the competition workshop at NeurIPS 2022 (registration provided). Our current planned procedures for distributing the pool are <a href="prizes.html">here</a>.</p>
<h4>News</h4>
<ul>
<li><b>February 4:</b> The workshop recording is available <a href="https://drive.google.com/file/d/1SjDy2NVVOakNf-SuwPPGILeXq-UBywp8/view?usp=sharing" style="text-decoration: underline;">here</a>. The validation phase annotations are available <a href="https://drive.google.com/drive/folders/1362q69BGJktPYKOXRt0fmp4eIlMi8-Ek?usp=share_link" style="text-decoration: underline;">here</a>.</li>
<li><b>November 27:</b> The final <a href="leaderboards.html" style="text-decoration: underline;">leaderboards</a> have been released.</li>
<li><b>November 4:</b> The competition has ended. For information on the upcoming competition workshop, see <a href="workshop.html" style="text-decoration: underline;">here</a>.</li>
<li><b>November 1:</b> The final round of the competition has begun.</li>
<li><b>October 16:</b> The test phase for tracks in the primary round of the competition has begun.</li>
<li><b>August 12:</b> Updated the rules to clarify that using batch statistics of the validation set is allowed.</li>
<li><b>July 27:</b> We now have a mailing list for receiving updates and reminders through email: <a href="https://groups.google.com/g/tdc-updates">https://groups.google.com/g/tdc-updates</a>. We will also continue to post updates here and on the competition pages.</li>
<li><b>July 25:</b> Updated the rules and the validation set for the detection track; <a href="https://codalab.lisn.upsaclay.fr/forums/5951/810/" style="text-decoration: underline;">see here for details</a>.</li>
<li><b>July 18:</b> The validation phase for tracks in the primary round of the competition has begun.</li>
<li><b>July 15:</b> Due to server maintenance, the release of the training data and the opening of the evaluation servers for the validation sets are being moved back three days to 7/18.</li>
</ul>
<details>
<summary><b>What are neural Trojans?</b></summary>
<p>Researchers have shown that adversaries can insert hidden functionality into deep neural networks such that the networks behave normally most of the time but abruptly change their behavior when triggered by the adversary. This is known as a neural Trojan attack. Neural Trojans can be implanted through a variety of attack vectors. One such attack vector is poisoning the training dataset. For example, in the figure below the adversary has surreptitiously poisoned the training set of a classifier so that when a certain <i>trigger</i> is present, the classifier changes its prediction to the <i>target label</i>.</p>
<figure>
<img src="{{ '/img/trojan_figure_with_arrow.png' | relative_url }}" alt="" width="900">
<figcaption><b>Figure:</b> An example of a data poisoning Trojan attack. When the trigger is inserted into images at test time, the hidden functionality implanted by the adversary reveals itself and the network outputs the target label with high probability. When the trigger is not present, the network behaves normally. (<a href="https://arxiv.org/abs/1708.06733" style="text-decoration: underline;">figure credit</a>)</figcaption>
</figure>
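<p>To make the poisoning step above concrete, here is a minimal, hypothetical sketch of how an adversary might stamp a patch trigger onto a fraction of training images and relabel them to the target class. The function name, array shapes, and trigger placement are illustrative assumptions, not the attack used in this competition.</p>
<pre><code class="language-python">
import numpy as np

def poison_dataset(images, labels, target_label, poison_frac=0.1, trigger_value=1.0):
    """Illustrative data poisoning: stamp a small patch trigger onto a random
    fraction of the training images and switch their labels to the target class."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_frac * len(images))
    idx = np.random.choice(len(images), size=n_poison, replace=False)
    # Place a 3x3 patch in the bottom-right corner; assumes images of shape (N, H, W) or (N, H, W, C).
    images[idx, -3:, -3:] = trigger_value
    labels[idx] = target_label
    return images, labels
</code></pre>
<p>A detector never sees this poisoning code; it only sees the resulting trained network, which is what makes detection challenging.</p>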
<p>Many different kinds of Trojan attacks exist, including attacks with nearly invisible triggers that are hard to identify through manual inspection of a training set. However, even though the trigger may be invisible, it may still be fairly easy to detect the presence of a Trojan with purpose-built detectors. This is known as the problem of <i>Trojan detection</i>. A second kind of attack vector is to release Trojaned networks on model sharing libraries, such as Hugging Face or PyTorch Hub, allowing for even greater control over the inserted Trojans. For this competition, we leverage this attack vector to insert Trojans that are designed to be hard to detect.</p>
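<p>One well-known family of purpose-built detectors tries to reverse-engineer a candidate trigger for each possible target label, in the spirit of methods such as Neural Cleanse: if a very small perturbation reliably flips clean inputs to one particular class, the network is likely Trojaned. Below is a rough, hedged sketch of that idea in PyTorch; the model interface, batch shape, and hyperparameters are assumptions, and this is not the competition's baseline detector.</p>
<pre><code class="language-python">
import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, clean_batch, target_label, steps=500, lam=0.01, lr=0.1):
    """Optimize a small mask and pattern that push clean inputs toward the target label.
    A tiny reconstructed trigger with a high success rate is evidence of a Trojan."""
    model.eval()
    n, c, h, w = clean_batch.shape                     # assumed float image batch of shape (N, C, H, W)
    mask = torch.zeros(1, 1, h, w, requires_grad=True)
    pattern = torch.zeros(1, c, h, w, requires_grad=True)
    opt = torch.optim.Adam([mask, pattern], lr=lr)
    target = torch.full((n,), target_label, dtype=torch.long)
    for _ in range(steps):
        m = torch.sigmoid(mask)                        # keep mask values in [0, 1]
        x_trig = (1 - m) * clean_batch + m * torch.sigmoid(pattern)
        # Encourage the target prediction while penalizing the trigger's size.
        loss = F.cross_entropy(model(x_trig), target) + lam * m.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask).detach(), torch.sigmoid(pattern).detach()
</code></pre>
<p>Running this for every candidate label and flagging networks whose smallest reconstructed trigger is anomalously small is roughly how such detectors decide between "Trojan" and "clean". The Trojans in this competition are designed to be harder to detect, so simple signals like this may be less reliable.</p>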
<p>For more information on neural Trojans, please see the lecture materials here: <a href="https://course.mlsafety.org/calendar/#monitoring" style="text-decoration: underline;">https://course.mlsafety.org/calendar/#monitoring</a></p>
</details>
<details>
<summary><b>Why participate?</b></summary>
<p>Neural Trojans are an important issue in the security of machine learning systems, and Trojan detection is the first line of defense. Thus, it is important to know whether the attacker or defender has an advantage in Trojan detection. A competition is a good way to figure this out. We expect to learn novel Trojan detection/construction strategies and likely some unanticipated phenomena. Additionally, a highly refined understanding of Trojan detection probably requires being able to understand what's going on inside a network on some level, and this is one of the great questions of our time.</p>
<p>As AI systems become more capable, the risks posed by hidden functionality may grow substantially. Developing tools and insights for detecting hidden functionality in modern AI systems could therefore lay important groundwork for tackling future risks. In particular, future AIs could potentially engage in forms of deception, not out of malice but because deception can help agents achieve their goals or receive approval from humans. Once deceptive AI systems obtain sufficient leverage, these systems could take a "treacherous turn" and bypass human control. Neural Trojans are the closest modern analogue to the risk of treacherous turns in future AI systems and thus provide a microcosm for studying treacherous turns (<a href="https://arxiv.org/pdf/2206.05862.pdf" style="text-decoration: underline;">source</a>).</p>
</details>
<h2>Overview</h2>
<p>How hard is neural network Trojan detection? Participants will help answer this question in three main tracks:</p>
<p>
<ol>
<li><b>Trojan Detection Track:</b> Given a dataset of Trojaned and clean networks spanning multiple data sources, build a Trojan detector that classifies a test set of networks with held-out labels (Trojan, clean). For more information, see <a href="tracks.html#trojan-detection" style="text-decoration: underline">here</a>; a minimal illustrative detector sketch follows this list.</li>
<li><b>Trojan Analysis Track:</b> Given a dataset of Trojaned networks spanning multiple data sources, predict various properties of Trojaned networks on a test set with held-out labels. This track has two subtracks: (1) target label prediction, (2) trigger synthesis. For more information, see <a href="tracks.html#trojan-analysis" style="text-decoration: underline;">here</a>.</li>
<li><b>Evasive Trojans Track:</b> Given a dataset of clean networks and a list of attack specifications, train a small set of Trojaned networks meeting the specifications and upload them to the evaluation server. The server will verify that the attack specifications are met, then train and evaluate a baseline Trojan detector using held-out clean networks and the submitted Trojaned networks. The task is to create Trojaned networks that are hard to detect. For more information, see <a href="tracks.html#evasive-trojans" style="text-decoration: underline;">here</a>.</li>
</ol>
</p>
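<p>As a rough illustration of what a submission to the Trojan Detection Track could look like, the sketch below fits a simple meta-classifier on summary statistics of each network's trained parameters and then scores held-out networks. The data-loading convention, feature choice, and use of scikit-learn are assumptions for illustration; see the tracks page linked above for the actual data formats and baselines.</p>
<pre><code class="language-python">
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

def weight_features(model):
    """Summarize a network by simple per-layer statistics of its trained parameters."""
    feats = []
    for p in model.parameters():
        w = p.detach().flatten().float()
        feats += [w.mean().item(), w.std().item(), w.abs().max().item(), w.norm().item()]
    return np.array(feats)

def train_detector(train_models, train_labels):
    """train_models: list of networks sharing one architecture (so feature vectors align);
    train_labels: 1 for Trojaned, 0 for clean."""
    X = np.stack([weight_features(m) for m in train_models])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, train_labels)
    return clf

def score_networks(clf, test_models):
    """Return the estimated probability that each held-out network is Trojaned."""
    X = np.stack([weight_features(m) for m in test_models])
    return clf.predict_proba(X)[:, 1]
</code></pre>
<p>In practice, per-data-source detectors (e.g., separate ones for MNIST and CIFAR-10 networks) and richer features than raw weight statistics would likely be needed; the sketch is only meant to show the shape of the task.</p>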
<p>The competition has two rounds: In the primary round, participants will compete on the three main tracks. In the final round, the solution of the first-place team in the Evasive Trojans track will be used to train a new set of hard-to-detect Trojans, and participants will compete to detect these networks. For more information on the final round, see <a href="tracks.html#final-round">here</a>.</p>
<p><b>Compute Credits:</b> To enable broader participation, we are awarding $100 compute credit grants to student teams that would not otherwise be able to participate. For details on how to apply, see <a href="resources.html" style="text-decoration: underline;">here</a>.</p>
<h2>Important Dates</h2>
<ul>
<li><b>July 8:</b> Registration opens on CodaLab</li>
<li><b>July 17:</b> Training data released for the primary round. Evaluation servers open for the validation sets</li>
<li><b>October 15:</b> Evaluation servers open for the test sets</li>
<li><b>October 22:</b> Final submissions for the primary round</li>
<li><b>October 28:</b> Evaluation data released for the final round.</li>
<li><b>October 31:</b> Final submissions for the final round</li>
</ul>
<h2 id="rules">Rules</h2>
<ol>
<li><b>Open Format:</b> This is an open competition. All participants are encouraged to share their methods upon conclusion of the competition, and outstanding submissions will be highlighted in a joint publication. To be eligible for prizes, winning teams are required to share their methods, code, and models (at least with the organizers, although public releases are encouraged).</li>
<li><b>Registration:</b> Double registration is not allowed. We expect teams to self-certify that all team members are not part of a different team registered for the competition, and we will actively monitor for violation of this rule. Teams may participate in multiple tracks. Organizers are not allowed to participate in the competition or win prizes.</li>
<li><b>Prize Distribution:</b> Monetary prizes will be awarded to teams as specified in the <a href="prizes.html" style="text-decoration: underline;">Prizes page</a> of the competition website. To avoid a possible unfair advantage, the first place team of the Evasive Trojans Track, whose models are used in the final round, will not be eligible for prizes in the final round. However, they may still participate in the leaderboard.</li>
<li><b>Training Data:</b> Teams may only submit results of models trained on the provided training set of Trojaned and clean networks. An exception to this is that teams may use batch statistics of networks in the validation/test set to improve detection (e.g., clustering, pseudo-labeling). Training additional networks from scratch is not allowed, as it gives teams with more compute an unfair advantage. We expect teams to self-certify that they do not train additional networks.</li>
<li><b>Detection Methods:</b> Augmentation of the provided dataset of neural networks is allowed as long as it does not involve training additional networks from scratch. Using inputs from the data sources (e.g., MNIST, CIFAR-10, etc.) is allowed. The use of features that are clearly loopholes is not allowed (e.g., differences in Python classes, batch norm num_batches_tracked statistics, or other discrepancies that would be easy to patch but may have been overlooked). As this is a new format of competition, we may not anticipate all loopholes and we encourage participants to alert us to their existence. Legitimate features that do not constitute loopholes include all features derived from the trained parameters of the networks.</li>
<li><b>Rule Breaking:</b> Breaking the rules may result in disqualification, and significant rule breaking will result in ineligibility for prizes.</li>
</ol>
<p>These rules are an initial set; during registration, we require participants to consent to changes in the rules should an urgent need arise. If a situation arises that was not anticipated, we will implement a fair solution, ideally reached through a consensus of participants.</p>
<h2>Organizers</h2>
<div class="wrapper">
<div class="grid">
<div class="grid-item">
<a href="https://scholar.google.com/citations?user=fGeEmLQAAAAJ&hl=en">
<figure>
<img src="{{ '/img/people/mantas.jpg' | relative_url }}" alt="">
<figcaption>Mantas Mazeika<br>(UIUC)</figcaption>
</figure>
</a>
</div>
<div class="grid-item">
<a href="https://people.eecs.berkeley.edu/~hendrycks/" target="_blank">
<figure>
<img src="{{ '/img/people/hendrycks.jpeg' | relative_url }}" alt="">
<figcaption>Dan Hendrycks<br>(UC Berkeley)</figcaption>
</figure>
</a>
</div>
<div class="grid-item">
<a href="http://huichenli.net/" target="_blank">
<figure>
<img src="{{ '/img/people/hli.jpeg' | relative_url }}" alt="">
<figcaption>Huichen Li<br>(UIUC)</figcaption>
</figure>
</a>
</div>
<div class="grid-item">
<a href="https://scholar.google.com/citations?user=rdMZZQwAAAAJ" target="_blank">
<figure>
<img src="{{ '/img/people/xu.jpeg' | relative_url }}" alt="">
<figcaption>Xiaojun Xu<br>(UIUC)</figcaption>
</figure>
</a>
</div>
<div class="grid-item">
<a href="https://shough.me/" target="_blank">
<figure>
<img src="{{ '/img/people/hough.png' | relative_url }}" alt="">
<figcaption>Sidney Hough<br>(Stanford)</figcaption>
</figure>
</a>
</div>
<div class="grid-item">
<a href="https://andyzoujm.github.io" target="_blank">
<figure>
<img src="{{ '/img/people/andyzou.png' | relative_url }}" alt="">
<figcaption>Andy Zou<br>(UC Berkeley)</figcaption>
</figure>
</a>
</div>
<div class="grid-item">
<a href="https://rajabia.github.io/" target="_blank">
<figure>
<img src="{{ '/img/people/rajabi.png' | relative_url }}" alt="">
<figcaption>Arezoo Rajabi<br>(UW)</figcaption>
</figure>
</a>
</div>
<div class="grid-item">
<a href="https://people.eecs.berkeley.edu/~dawnsong/" target="_blank">
<figure>
<img src="{{ '/img/people/song.jpeg' | relative_url }}" alt="">
<figcaption>Dawn Song<br>(UC Berkeley)</figcaption>
</figure>
</a>
</div>
<div class="grid-item">
<a href="https://people.ece.uw.edu/radha/index.html" target="_blank">
<figure>
<img src="{{ '/img/people/poovendran.jpeg' | relative_url }}" alt="">
<figcaption>Radha Poovendran<br>(UW)</figcaption>
</figure>
</a>
</div>
<div class="grid-item">
<a href="https://aisecure.github.io/" target="_blank">
<figure>
<img src="{{ '/img/people/bli.jpeg' | relative_url }}" alt="">
<figcaption>Bo Li<br>Amazon Scholar<br>(UIUC)</figcaption>
</figure>
</a>
</div>
<div class="grid-item">
<a href="http://luthuli.cs.uiuc.edu/~daf/">
<figure>
<img src="{{ '/img/people/forsyth.png' | relative_url }}" alt="">
<figcaption>David Forsyth<br>(UIUC)</figcaption>
</figure>
</a>
</div>
</div>
</div>
<p>Contact: <a href="mailto:[email protected]">[email protected]</a></p>
<p>To receive updates and reminders through email, join the tdc-updates Google Group: <a href="https://groups.google.com/g/tdc-updates">https://groups.google.com/g/tdc-updates</a>. Updates will also be posted to the website and competition pages.</p>
<p>We are kindly sponsored by Open Philanthropy.</p>
<hr>
<p id="citation1" style="margin : 0; padding-top:0;">1: "ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation". Liu et al.</p>
<p id="citation2" style="margin : 0; padding-top:0;">2: "Planting Undetectable Backdoors in Machine Learning Models". Goldwasser et al.</p>