| Title | Type | Venue | Code | Year |
| --- | --- | --- | --- | --- |
| Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem | ⚔Attack | 📝WSDM | :octocat:Code | 2022 |
| Inference Attacks Against Graph Neural Networks | ⚔Attack | 📝USENIX Security | :octocat:Code | 2022 |
| Model Stealing Attacks Against Inductive Graph Neural Networks | ⚔Attack | 📝IEEE Symposium on Security and Privacy | :octocat:Code | 2022 |
| Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation | ⚔Attack | 📝WWW | :octocat:Code | 2022 |
| Neighboring Backdoor Attacks on Graph Convolutional Network | ⚔Attack | 📝arXiv | :octocat:Code | 2022 |
| Understanding and Improving Graph Injection Attack by Promoting Unnoticeability | ⚔Attack | 📝ICLR | :octocat:Code | 2022 |
| Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs | ⚔Attack | 📝AAAI | :octocat:Code | 2022 |
| More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks | ⚔Attack | 📝arXiv |  | 2022 |
| Interpretable and Effective Reinforcement Learning for Attacking against Graph-based Rumor Detection | ⚔Attack | 📝arXiv |  | 2022 |
| Projective Ranking-based GNN Evasion Attacks | ⚔Attack | 📝arXiv |  | 2022 |
| GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation | ⚔Attack | 📝arXiv |  | 2022 |
| Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization | ⚔Attack | 📝Asia CCS | :octocat:Code | 2022 |
| Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees | ⚔Attack | 📝CVPR | :octocat:Code | 2022 |
| Black-box Node Injection Attack for Graph Neural Networks | ⚔Attack | 📝arXiv | :octocat:Code | 2022 |
| Stability and Generalization Capabilities of Message Passing Graph Neural Networks | ⚖Stability | 📝arXiv'2022 |  | 2022 |
| On the Prediction Instability of Graph Neural Networks | ⚖Stability | 📝arXiv'2022 |  | 2022 |
| GraphWar: A graph adversarial learning toolbox based on PyTorch and DGL | ⚙Toolbox | 📝arXiv'2022 | :octocat:GraphWar | 2022 |
| A Comparative Study on Robust Graph Neural Networks to Structural Noises | 📃Survey | 📝AAAI DLG'2022 |  | 2022 |
| Recent Advances in Reliable Deep Graph Learning: Inherent Noise, Distribution Shift, and Adversarial Attack | 📃Survey | 📝arXiv'2022 |  | 2022 |
| Graph Vulnerability and Robustness: A Survey | 📃Survey | 📝TKDE'2022 |  | 2022 |
| A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability | 📃Survey | 📝arXiv'2022 |  | 2022 |
| Trustworthy Graph Neural Networks: Aspects, Methods and Trends | 📃Survey | 📝arXiv'2022 |  | 2022 |
| A Survey of Trustworthy Graph Learning: Reliability, Explainability, and Privacy Protection | 📃Survey | 📝arXiv'2022 |  | 2022 |
| Unsupervised Adversarially-Robust Representation Learning on Graphs | 🛡Defense | 📝AAAI | :octocat:Code | 2022 |
| Towards Robust Graph Neural Networks for Noisy Graphs with Sparse Labels | 🛡Defense | 📝arXiv | :octocat:Code | 2022 |
| Mind Your Solver! On Adversarial Attack and Defense for Combinatorial Optimization | 🛡Defense | 📝arXiv | :octocat:Code | 2022 |
| Learning Robust Representation through Graph Adversarial Contrastive Learning | 🛡Defense | 📝arXiv |  | 2022 |
| GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks | 🛡Defense | 📝arXiv |  | 2022 |
| Graph Neural Network for Local Corruption Recovery | 🛡Defense | 📝arXiv | :octocat:Code | 2022 |
| Robust Heterogeneous Graph Neural Networks against Adversarial Attacks | 🛡Defense | 📝AAAI |  | 2022 |
| How Does Bayesian Noisy Self-Supervision Defend Graph Convolutional Networks? | 🛡Defense | 📝Neural Processing Letters |  | 2022 |
| Defending Graph Convolutional Networks against Dynamic Graph Perturbations via Bayesian Self-supervision | 🛡Defense | 📝AAAI | :octocat:Code | 2022 |
| SimGRACE: A Simple Framework for Graph Contrastive Learning without Data Augmentation | 🛡Defense | 📝WWW | :octocat:Code | 2022 |
| Exploring High-Order Structure for Robust Graph Structure Learning | 🛡Defense | 📝arXiv |  | 2022 |
| Detecting Topology Attacks against Graph Neural Networks | 🛡Defense | 📝arXiv |  | 2022 |
| LPGNet: Link Private Graph Networks for Node Classification | 🛡Defense | 📝arXiv |  | 2022 |
| GUARD: Graph Universal Adversarial Defense | 🛡Defense | 📝arXiv | :octocat:Code | 2022 |
| How Members of Covert Networks Conceal the Identities of Their Leaders | ⚔Attack | 📝ACM TIST |  | 2021 |
| Task and Model Agnostic Adversarial Attack on Graph Neural Networks | ⚔Attack | 📝arXiv |  | 2021 |
| FHA: Fast Heuristic Attack Against Graph Convolutional Networks | ⚔Attack | 📝ICDS |  | 2021 |
| Adversarial Attack against Cross-lingual Knowledge Graph Alignment | ⚔Attack | 📝EMNLP |  | 2021 |
| Structural Attack against Graph Based Android Malware Detection | ⚔Attack | 📝CCS |  | 2021 |
| UNTANGLE: Unlocking Routing and Logic Obfuscation Using Graph Neural Networks-based Link Prediction | ⚔Attack | 📝ICCAD | :octocat:Code | 2021 |
| VIKING: Adversarial Attack on Network Embeddings via Supervised Network Poisoning | ⚔Attack | 📝PAKDD | :octocat:Code | 2021 |
| Adversarial Diffusion Attacks on Graph-based Traffic Prediction Models | ⚔Attack | 📝arXiv | :octocat:Code | 2021 |
| SAGE: Intrusion Alert-driven Attack Graph Extractor | ⚔Attack | 📝KDD Workshop | :octocat:Code | 2021 |
| Universal Spectral Adversarial Attacks for Deformable Shapes | ⚔Attack | 📝CVPR |  | 2021 |
| Joint Detection and Localization of Stealth False Data Injection Attacks in Smart Grids using Graph Neural Networks | ⚔Attack | 📝arXiv |  | 2021 |
| GraphMI: Extracting Private Graph Data from Graph Neural Networks | ⚔Attack | 📝IJCAI | :octocat:Code | 2021 |
| Adversarial Attack on Large Scale Graph | ⚔Attack | 📝TKDE | :octocat:Code | 2021 |
| Adversarial Attack Framework on Graph Embedding Models with Limited Knowledge | ⚔Attack | 📝arXiv |  | 2021 |
| TDGIA: Effective Injection Attacks on Graph Neural Networks | ⚔Attack | 📝KDD | :octocat:Code | 2021 |
| Graph Backdoor | ⚔Attack | 📝USENIX Security |  | 2021 |
| BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection | ⚔Attack | 📝arXiv |  | 2021 |
| Membership Inference Attack on Graph Neural Networks | ⚔Attack | 📝arXiv |  | 2021 |
| Graph Adversarial Attack via Rewiring | ⚔Attack | 📝KDD | :octocat:Code | 2021 |
| GReady for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack | ⚔Attack | 📝Information Sciences |  | 2021 |
| Optimal Edge Weight Perturbations to Attack Shortest Paths | ⚔Attack | 📝arXiv |  | 2021 |
| Structack: Structure-based Adversarial Attacks on Graph Neural Networks | ⚔Attack | 📝ACM Hypertext | :octocat:Code | 2021 |
| PATHATTACK: Attacking Shortest Paths in Complex Networks | ⚔Attack | 📝arXiv |  | 2021 |
| Explainability-based Backdoor Attacks Against Graph Neural Networks | ⚔Attack | 📝WiseML@WiSec |  | 2021 |
| Stealing Links from Graph Neural Networks | ⚔Attack | 📝USENIX Security |  | 2021 |
| GraphAttacker: A General Multi-Task GraphAttack Framework | ⚔Attack | 📝arXiv | :octocat:Code | 2021 |
| Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in Graph-based Attack and Defense | ⚔Attack | 📝arXiv |  | 2021 |
| Node-Level Membership Inference Attacks Against Graph Neural Networks | ⚔Attack | 📝arXiv |  | 2021 |
| COREATTACK: Breaking Up the Core Structure of Graphs | ⚔Attack | 📝arXiv |  | 2021 |
| Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods | ⚔Attack | 📝EMNLP | :octocat:Code | 2021 |
| Adversarial Attacks on Graph Classification via Bayesian Optimisation | ⚔Attack | 📝NeurIPS | :octocat:Code | 2021 |
| Graph Universal Adversarial Attacks: A Few Bad Actors Ruin Graph Learning Models | ⚔Attack | 📝IJCAI | :octocat:Code | 2021 |
| Graph Structural Attack by Spectral Distance | ⚔Attack | 📝arXiv |  | 2021 |
| Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness | ⚔Attack | 📝NeurIPS |  | 2021 |
| Robustness of Graph Neural Networks at Scale | ⚔Attack | 📝NeurIPS | :octocat:Code | 2021 |
| Attacking Graph Neural Networks at Scale | ⚔Attack | 📝AAAI workshop |  | 2021 |
| Surrogate Representation Learning with Isometric Mapping for Gray-box Graph Adversarial Attacks | ⚔Attack | 📝arXiv |  | 2021 |
| Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications | ⚔Attack | 📝ICDM | :octocat:Code | 2021 |
| Graph-Fraudster: Adversarial Attacks on Graph Neural Network Based Vertical Federated Learning | ⚔Attack | 📝arXiv |  | 2021 |
| Time-aware Gradient Attack on Dynamic Network Link Prediction | ⚔Attack | 📝TKDE |  | 2021 |
| Query-based Adversarial Attacks on Graph with Fake Nodes | ⚔Attack | 📝arXiv |  | 2021 |
| Projective Ranking: A Transferable Evasion Attack Method on Graph Neural Networks | ⚔Attack | 📝CIKM |  | 2021 |
| Derivative-free optimization adversarial attacks for graph convolutional networks | ⚔Attack | 📝PeerJ |  | 2021 |
| Watermarking Graph Neural Networks based on Backdoor Attacks | ⚔Attack | 📝arXiv |  | 2021 |
| Single Node Injection Attack against Graph Neural Networks | ⚔Attack | 📝CIKM | :octocat:Code | 2021 |
| Spatially Focused Attack against Spatiotemporal Graph Neural Networks | ⚔Attack | 📝arXiv |  | 2021 |
| Reinforcement Learning For Data Poisoning on Graph Neural Networks | ⚔Attack | 📝arXiv |  | 2021 |
| Graphfool: Targeted Label Adversarial Attack on Graph Embedding | ⚔Attack | 📝arXiv |  | 2021 |
| Towards Revealing Parallel Adversarial Attack on Politician Socialnet of Graph Structure | ⚔Attack | 📝Security and Communication Networks |  | 2021 |
| Network Embedding Attack: An Euclidean Distance Based Method | ⚔Attack | 📝MDATA |  | 2021 |
| Preserve, Promote, or Attack? GNN Explanation via Topology Perturbation | ⚔Attack | 📝arXiv |  | 2021 |
| Jointly Attacking Graph Neural Network and its Explanations | ⚔Attack | 📝arXiv |  | 2021 |
| DeHiB: Deep Hidden Backdoor Attack on Semi-Supervised Learning via Adversarial Perturbation | ⚔Attack | 📝AAAI |  | 2021 |
| Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings | ⚔Attack | 📝arXiv | :octocat:Code | 2021 |
| Single-Node Attack for Fooling Graph Neural Networks | ⚔Attack | 📝KDD Workshop | :octocat:Code | 2021 |
| The Robustness of Graph k-shell Structure under Adversarial Attacks | ⚔Attack | 📝arXiv |  | 2021 |
| Poisoning Knowledge Graph Embeddings via Relation Inference Patterns | ⚔Attack | 📝ACL | :octocat:Code | 2021 |
| A Hard Label Black-box Adversarial Attack Against Graph Neural Networks | ⚔Attack | 📝CCS |  | 2021 |
| GNNUnlock: Graph Neural Networks-based Oracle-less Unlocking Scheme for Provably Secure Logic Locking | ⚔Attack | 📝DATE Conference |  | 2021 |
| Graph Stochastic Neural Networks for Semi-supervised Learning | ⚔Attack | 📝arXiv | :octocat:Code | 2021 |
| Towards a Unified Framework for Fair and Stable Graph Representation Learning | ⚖Stability | 📝UAI'2021 | :octocat:Code | 2021 |
| Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training data | ⚖Stability | 📝arXiv'2021 |  | 2021 |
| Stability of Graph Convolutional Neural Networks to Stochastic Perturbations | ⚖Stability | 📝arXiv'2021 |  | 2021 |
| Training Stable Graph Neural Networks Through Constrained Learning | ⚖Stability | 📝arXiv'2021 |  | 2021 |
| DeepRobust: a Platform for Adversarial Attacks and Defenses | ⚙Toolbox | 📝AAAI'2021 | :octocat:DeepRobust | 2021 |
| Evaluating Graph Vulnerability and Robustness using TIGER | ⚙Toolbox | 📝arXiv'2021 | :octocat:TIGER | 2021 |
| Graph Robustness Benchmark: Rethinking and Benchmarking Adversarial Robustness of Graph Neural Networks | ⚙Toolbox | 📝NeurIPS OpenReview'2021 | :octocat:Graph Robustness Benchmark (GRB) | 2021 |
| Deep Graph Structure Learning for Robust Representations: A Survey | 📃Survey | 📝arXiv'2021 |  | 2021 |
| Robustness of deep learning models on graphs: A survey | 📃Survey | 📝AI Open'2021 |  | 2021 |
| Graph Neural Networks Methods, Applications, and Opportunities | 📃Survey | 📝arXiv'2021 |  | 2021 |
| Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies | 📃Survey | 📝SIGKDD Explorations'2021 |  | 2021 |
| Robust Certification for Laplace Learning on Geometric Graphs | 🔐Certification | 📝MSML'2021 |  | 2021 |
| Certifying Robustness of Graph Laplacian Based Semi-Supervised Learning | 🔐Certification | 📝ICLR OpenReview'2021 |  | 2021 |
| Adversarial Immunization for Improving Certifiable Robustness on Graphs | 🔐Certification | 📝WSDM'2021 |  | 2021 |
| Collective Robustness Certificates: Exploiting Interdependence in Graph Neural Networks | 🔐Certification | 📝ICLR'2021 | :octocat:Code | 2021 |
| Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation | 🔐Certification | 📝KDD'2021 | :octocat:Code | 2021 |
| CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks | 🚀Others | 📝arXiv'2021 |  | 2021 |
| SIGL: Securing Software Installations Through Deep Graph Learning | 🚀Others | 📝USENIX'2021 |  | 2021 |
| A Robust and Generalized Framework for Adversarial Graph Embedding | 🛡Defense | 📝arXiv | :octocat:Code | 2021 |
| Learning to Drop: Robust Graph Neural Network via Topological Denoising | 🛡Defense | 📝WSDM | :octocat:Code | 2021 |
| How effective are Graph Neural Networks in Fraud Detection for Network Data? | 🛡Defense | 📝arXiv |  | 2021 |
| An Introduction to Robust Graph Convolutional Networks | 🛡Defense | 📝arXiv |  | 2021 |
| E-GraphSAGE: A Graph Neural Network based Intrusion Detection System | 🛡Defense | 📝arXiv |  | 2021 |
| Spatio-Temporal Sparsification for General Robust Graph Convolution Networks | 🛡Defense | 📝arXiv |  | 2021 |
| Robust graph convolutional networks with directional graph adversarial training | 🛡Defense | 📝Applied Intelligence |  | 2021 |
| Detection and Defense of Topological Adversarial Attacks on Graphs | 🛡Defense | 📝AISTATS |  | 2021 |
| Unveiling the potential of Graph Neural Networks for robust Intrusion Detection | 🛡Defense | 📝arXiv | :octocat:Code | 2021 |
| Adversarial Robustness of Probabilistic Network Embedding for Link Prediction | 🛡Defense | 📝arXiv |  | 2021 |
| EGC2: Enhanced Graph Classification with Easy Graph Compression | 🛡Defense | 📝arXiv |  | 2021 |
| LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis | 🛡Defense | 📝arXiv |  | 2021 |
| Structure-Aware Hierarchical Graph Pooling using Information Bottleneck | 🛡Defense | 📝IJCNN |  | 2021 |
| Mal2GCN: A Robust Malware Detection Approach Using Deep Graph Convolutional Networks With Non-Negative Weights | 🛡Defense | 📝arXiv |  | 2021 |
| CoG: a Two-View Co-training Framework for Defending Adversarial Attacks on Graph | 🛡Defense | 📝arXiv |  | 2021 |
| Releasing Graph Neural Networks with Differential Privacy Guarantees | 🛡Defense | 📝arXiv |  | 2021 |
| Speedup Robust Graph Structure Learning with Low-Rank Information | 🛡Defense | 📝CIKM |  | 2021 |
| A Lightweight Metric Defence Strategy for Graph Neural Networks Against Poisoning Attacks | 🛡Defense | 📝ICICS | :octocat:Code | 2021 |
| Node Feature Kernels Increase Graph Convolutional Network Robustness | 🛡Defense | 📝arXiv | :octocat:Code | 2021 |
| On the Relationship between Heterophily and Robustness of Graph Neural Networks | 🛡Defense | 📝arXiv |  | 2021 |
| Unified Robust Training for Graph Neural Networks against Label Noise | 🛡Defense | 📝arXiv |  | 2021 |
| Randomized Generation of Adversary-Aware Fake Knowledge Graphs to Combat Intellectual Property Theft | 🛡Defense | 📝AAAI |  | 2021 |
| Interpretable Stability Bounds for Spectral Graph Filters | 🛡Defense | 📝arXiv |  | 2021 |
| Personalized privacy protection in social networks through adversarial modeling | 🛡Defense | 📝AAAI |  | 2021 |
| Integrated Defense for Resilient Graph Matching | 🛡Defense | 📝ICML |  | 2021 |
| Unveiling Anomalous Nodes Via Random Sampling and Consensus on Graphs | 🛡Defense | 📝ICASSP |  | 2021 |
| Robust Network Alignment via Attack Signal Scaling and Adversarial Perturbation Elimination | 🛡Defense | 📝WWW |  | 2021 |
| Information Obfuscation of Graph Neural Network | 🛡Defense | 📝ICML | :octocat:Code | 2021 |
| Improving Robustness of Graph Neural Networks with Heterophily-Inspired Designs | 🛡Defense | 📝arXiv |  | 2021 |
| On Generalization of Graph Autoencoders with Adversarial Training | 🛡Defense | 📝ECML |  | 2021 |
| DeepInsight: Interpretability Assisting Detection of Adversarial Samples on Graphs | 🛡Defense | 📝ECML |  | 2021 |
| Graph Sanitation with Application to Node Classification | 🛡Defense | 📝arXiv |  | 2021 |
| Distributionally Robust Semi-Supervised Learning Over Graphs | 🛡Defense | 📝ICLR |  | 2021 |
| Robust Counterfactual Explanations on Graph Neural Networks | 🛡Defense | 📝arXiv |  | 2021 |
| Enhancing Robustness and Resilience of Multiplex Networks Against Node-Community Cascading Failures | 🛡Defense | 📝IEEE TSMC |  | 2021 |
| NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data | 🛡Defense | 📝TKDE | :octocat:Code | 2021 |
| Robust Graph Learning Under Wasserstein Uncertainty | 🛡Defense | 📝arXiv |  | 2021 |
| Towards Robust Graph Contrastive Learning | 🛡Defense | 📝arXiv |  | 2021 |
| Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks | 🛡Defense | 📝ICML |  | 2021 |
| UAG: Uncertainty-Aware Attention Graph Neural Network for Defending Adversarial Attacks | 🛡Defense | 📝AAAI |  | 2021 |
| Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks | 🛡Defense | 📝AAAI |  | 2021 |
| Power up! Robust Graph Convolutional Network against Evasion Attacks based on Graph Powering | 🛡Defense | 📝AAAI | :octocat:Code | 2021 |
| Node Similarity Preserving Graph Convolutional Networks | 🛡Defense | 📝WSDM | :octocat:Code | 2021 |
| Graph Transplant: Node Saliency-Guided Graph Mixup with Local Structure Preservation | 🛡Defense | 📝arXiv |  | 2021 |
| Not All Low-Pass Filters are Robust in Graph Convolutional Networks | 🛡Defense | 📝NeurIPS | :octocat:Code | 2021 |
| Towards Robust Reasoning over Knowledge Graphs | 🛡Defense | 📝arXiv |  | 2021 |
| Understanding Structural Vulnerability in Graph Convolutional Networks | 🛡Defense | 📝IJCAI | :octocat:Code | 2021 |
| Robust Graph Neural Networks via Probabilistic Lipschitz Constraints | 🛡Defense | 📝arXiv |  | 2021 |
| Graph Neural Networks with Adaptive Residual | 🛡Defense | 📝NeurIPS | :octocat:Code | 2021 |
| Graph-based Adversarial Online Kernel Learning with Adaptive Embedding | 🛡Defense | 📝ICDM |  | 2021 |
| Graph Posterior Network: Bayesian Predictive Uncertainty for Node Classification | 🛡Defense | 📝NeurIPS | :octocat:Code | 2021 |
| Graph Neural Networks with Feature and Structure Aware Random Walk | 🛡Defense | 📝arXiv |  | 2021 |
| Topological Relational Learning on Graphs | 🛡Defense | 📝NeurIPS | :octocat:Code | 2021 |
| Elastic Graph Neural Networks | 🛡Defense | 📝ICML | :octocat:Code | 2021 |
| Adversarial Attacks on Graph Neural Networks: Perturbations and their Patterns | ⚔Attack | 📝TKDD |  | 2020 |
| Practical Adversarial Attacks on Graph Neural Networks | ⚔Attack | 📝ICML Workshop |  | 2020 |
| An Efficient Adversarial Attack on Graph Structured Data | ⚔Attack | 📝IJCAI Workshop |  | 2020 |
| Adversarial Attacks on Graph Neural Networks via Node Injections: A Hierarchical Reinforcement Learning Approach | ⚔Attack | 📝WWW |  | 2020 |
| Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks | ⚔Attack | 📝BigData |  | 2020 |
| A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models | ⚔Attack | 📝AAAI | :octocat:Code | 2020 |
| Manipulating Node Similarity Measures in Networks | ⚔Attack | 📝AAMAS |  | 2020 |
| Adversarial Attack on Community Detection by Hiding Individuals | ⚔Attack | 📝WWW | :octocat:Code | 2020 |
| Adversarial Attack on Hierarchical Graph Pooling Neural Networks | ⚔Attack | 📝arXiv |  | 2020 |
| Link Prediction Adversarial Attack Via Iterative Gradient Attack | ⚔Attack | 📝IEEE Trans |  | 2020 |
| Backdoor Attacks to Graph Neural Networks | ⚔Attack | 📝SACMAT | :octocat:Code | 2020 |
| Efficient Evasion Attacks to Graph Neural Networks via Influence Function | ⚔Attack | 📝arXiv |  | 2020 |
| Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs | ⚔Attack | 📝arXiv |  | 2020 |
| Query-free Black-box Adversarial Attacks on Graphs | ⚔Attack | 📝arXiv |  | 2020 |
| Adversarial Attacks on Link Prediction Algorithms Based on Graph Neural Networks | ⚔Attack | 📝Asia CCS |  | 2020 |
| A Targeted Universal Attack on Graph Convolutional Network | ⚔Attack | 📝arXiv | :octocat:Code | 2020 |
| Adversarial Label-Flipping Attack and Defense for Graph Neural Networks | ⚔Attack | 📝ICDM | :octocat:Code | 2020 |
| Towards More Practical Adversarial Attacks on Graph Neural Networks | ⚔Attack | 📝NeurIPS | :octocat:Code | 2020 |
| Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation | ⚔Attack | 📝ICLR | :octocat:Code | 2020 |
| Cross Entropy Attack on Deep Graph Infomax | ⚔Attack | 📝IEEE ISCAS |  | 2020 |
| Attacking Graph-Based Classification without Changing Existing Connections | ⚔Attack | 📝ACSAC |  | 2020 |
| Adversarial Attacks on Deep Graph Matching | ⚔Attack | 📝NeurIPS |  | 2020 |
| Near-Black-Box Adversarial Attacks on Graph Neural Networks as An Influence Maximization Problem | ⚔Attack | 📝ICLR OpenReview |  | 2020 |
| One Vertex Attack on Graph Neural Networks-based Spatiotemporal Forecasting | ⚔Attack | 📝ICLR OpenReview |  | 2020 |
| Scalable Adversarial Attack on Graph Neural Networks with Alternating Direction Method of Multipliers | ⚔Attack | 📝arXiv |  | 2020 |
| Semantic-preserving Reinforcement Learning Attack Against Graph Neural Networks for Malware Detection | ⚔Attack | 📝arXiv |  | 2020 |
| A Graph Matching Attack on Privacy-Preserving Record Linkage | ⚔Attack | 📝CIKM |  | 2020 |
| Exploratory Adversarial Attacks on Graph Neural Networks | ⚔Attack | 📝ICDM | :octocat:Code | 2020 |
| Scalable Attack on Graph Data by Injecting Vicious Nodes | ⚔Attack | 📝ECML-PKDD | :octocat:Code | 2020 |
| Attackability Characterization of Adversarial Evasion Attack on Discrete Data | ⚔Attack | 📝KDD |  | 2020 |
| MGA: Momentum Gradient Attack on Network | ⚔Attack | 📝arXiv |  | 2020 |
| Adversarial Attacks to Scale-Free Networks: Testing the Robustness of Physical Criteria | ⚔Attack | 📝arXiv |  | 2020 |
| Adversarial Perturbations of Opinion Dynamics in Networks | ⚔Attack | 📝arXiv |  | 2020 |
| Network disruption: maximizing disagreement and polarization in social networks | ⚔Attack | 📝arXiv | :octocat:Code | 2020 |
| Adversarial attack on BC classification for scale-free networks | ⚔Attack | 📝AIP Chaos |  | 2020 |
| Adaptive Adversarial Attack on Graph Embedding via GAN | ⚔Attack | 📝SocialSec |  | 2020 |
| On the Stability of Graph Convolutional Neural Networks under Edge Rewiring | ⚖Stability | 📝arXiv'2020 |  | 2020 |
| Stability of Graph Neural Networks to Relative Perturbations | ⚖Stability | 📝ICASSP'2020 |  | 2020 |
| Graph Neural Networks: Architectures, Stability and Transferability | ⚖Stability | 📝arXiv'2020 |  | 2020 |
| Should Graph Convolution Trust Neighbors? A Simple Causal Inference Method | ⚖Stability | 📝arXiv'2020 |  | 2020 |
| Graph and Graphon Neural Network Stability | ⚖Stability | 📝arXiv'2020 |  | 2020 |
| A Survey of Adversarial Learning on Graph | 📃Survey | 📝arXiv'2020 |  | 2020 |
| Graph Neural Networks Taxonomy, Advances and Trends | 📃Survey | 📝arXiv'2020 |  | 2020 |
| Improving the Robustness of Wasserstein Embedding by Adversarial PAC-Bayesian Learning | 🔐Certification | 📝AAAI'2020 |  | 2020 |
| Certified Robustness of Graph Convolution Networks for Graph Classification under Topological Attacks | 🔐Certification | 📝NeurIPS'2020 | :octocat:Code | 2020 |
| Certifiable Robustness of Graph Convolutional Networks under Structure Perturbation | 🔐Certification | 📝KDD'2020 | :octocat:Code | 2020 |
| Efficient Robustness Certificates for Discrete Data: Sparsity-Aware Randomized Smoothing for Graphs, Images and More | 🔐Certification | 📝ICML'2020 | :octocat:Code | 2020 |
| Abstract Interpretation based Robustness Certification for Graph Convolutional Networks | 🔐Certification | 📝ECAI'2020 |  | 2020 |
| Certified Robustness of Graph Classification against Topology Attack with Randomized Smoothing | 🔐Certification | 📝GLOBECOM'2020 |  | 2020 |
| Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing | 🔐Certification | 📝WWW'2020 |  | 2020 |
| FLAG: Adversarial Data Augmentation for Graph Neural Networks | 🚀Others | 📝arXiv'2020 | :octocat:Code | 2020 |
| Watermarking Graph Neural Networks by Random Graphs | 🚀Others | 📝arXiv'2020 |  | 2020 |
| Training Robust Graph Neural Network by Applying Lipschitz Constant Constraint | 🚀Others | 📝CentraleSupélec'2020 | :octocat:Code | 2020 |
| When Does Self-Supervision Help Graph Convolutional Networks? | 🚀Others | 📝ICML'2020 |  | 2020 |
| Graph-Revised Convolutional Network | 🛡Defense | 📝ECML-PKDD | :octocat:Code | 2020 |
| Topological Effects on Attacks Against Vertex Classification | 🛡Defense | 📝arXiv |  | 2020 |
| Tensor Graph Convolutional Networks for Multi-relational and Robust Learning | 🛡Defense | 📝arXiv |  | 2020 |
| DefenseVGAE: Defending against Adversarial Attacks on Graph Data via a Variational Graph Autoencoder | 🛡Defense | 📝arXiv | :octocat:Code | 2020 |
| Dynamic Knowledge Graph-based Dialogue Generation with Improved Adversarial Meta-Learning | 🛡Defense | 📝arXiv |  | 2020 |
| AANE: Anomaly Aware Network Embedding For Anomalous Link Detection | 🛡Defense | 📝ICDM |  | 2020 |
| Provably Robust Node Classification via Low-Pass Message Passing | 🛡Defense | 📝ICDM |  | 2020 |
| Enhancing Graph Neural Network-based Fraud Detectors against Camouflaged Fraudsters | 🛡Defense | 📝CIKM | :octocat:Code | 2020 |
| Robust Collective Classification against Structural Attacks | 🛡Defense | 📝Preprint |  | 2020 |
| Robust Training of Graph Convolutional Networks via Latent Perturbation | 🛡Defense | 📝ECML-PKDD |  | 2020 |
| Robust Graph Representation Learning via Neural Sparsification | 🛡Defense | 📝ICML |  | 2020 |
| I-GCN: Robust Graph Convolutional Network via Influence Mechanism | 🛡Defense | 📝arXiv |  | 2020 |
| Adversary for Social Good: Protecting Familial Privacy through Joint Adversarial Attacks | 🛡Defense | 📝AAAI |  | 2020 |
| Smoothing Adversarial Training for GNN | 🛡Defense | 📝IEEE TCSS |  | 2020 |
| Graph Structure Reshaping Against Adversarial Attacks on Graph Neural Networks | 🛡Defense | 📝None | :octocat:Code | 2020 |
| RoGAT: a robust GNN combined revised GAT with adjusted graphs | 🛡Defense | 📝arXiv |  | 2020 |
| ResGCN: Attention-based Deep Residual Modeling for Anomaly Detection on Attributed Networks | 🛡Defense | 📝arXiv |  | 2020 |
| Adversarial Privacy Preserving Graph Embedding against Inference Attack | 🛡Defense | 📝arXiv | :octocat:Code | 2020 |
| Robust Graph Learning From Noisy Data | 🛡Defense | 📝IEEE Trans |  | 2020 |
| Learning Graph Embedding with Adversarial Training Methods | 🛡Defense | 📝IEEE Transactions on Cybernetics |  | 2020 |
| GNNGuard: Defending Graph Neural Networks against Adversarial Attacks | 🛡Defense | 📝NeurIPS | :octocat:Code | 2020 |
| All You Need Is Low (Rank): Defending Against Adversarial Attacks on Graphs | 🛡Defense | 📝WSDM | :octocat:Code | 2020 |
| How Robust Are Graph Neural Networks to Structural Noise? | 🛡Defense | 📝DLGMA |  | 2020 |
| Robust Detection of Adaptive Spammers by Nash Reinforcement Learning | 🛡Defense | 📝KDD | :octocat:Code | 2020 |
| Graph Structure Learning for Robust Graph Neural Networks | 🛡Defense | 📝KDD | :octocat:Code | 2020 |
| On The Stability of Polynomial Spectral Graph Filters | 🛡Defense | 📝ICASSP | :octocat:Code | 2020 |
| On the Robustness of Cascade Diffusion under Node Attacks | 🛡Defense | 📝WWW | :octocat:Code | 2020 |
| Friend or Faux: Graph-Based Early Detection of Fake Accounts on Social Networks | 🛡Defense | 📝WWW |  | 2020 |
| Towards an Efficient and General Framework of Robust Training for Graph Neural Networks | 🛡Defense | 📝ICASSP |  | 2020 |
| Transferring Robustness for Graph Neural Network Against Poisoning Attacks | 🛡Defense | 📝WSDM | :octocat:Code | 2020 |
| Graph Contrastive Learning with Augmentations | 🛡Defense | 📝NeurIPS | :octocat:Code | 2020 |
| Graph Information Bottleneck | 🛡Defense | 📝NeurIPS | :octocat:Code | 2020 |
| Ricci-GNN: Defending Against Structural Attacks Through a Geometric Approach | 🛡Defense | 📝ICLR OpenReview |  | 2020 |
| Provable Overlapping Community Detection in Weighted Graphs | 🛡Defense | 📝NeurIPS |  | 2020 |
| Adversarial Detection on Graph Structured Data | 🛡Defense | 📝PPMLP |  | 2020 |
| Variational Inference for Graph Convolutional Networks in the Absence of Graph Data and Adversarial Settings | 🛡Defense | 📝NeurIPS | :octocat:Code | 2020 |
| Reliable Graph Neural Networks via Robust Aggregation | 🛡Defense | 📝NeurIPS | :octocat:Code | 2020 |
| Towards Robust Graph Neural Networks against Label Noise | 🛡Defense | 📝ICLR OpenReview |  | 2020 |
| Graph Adversarial Networks: Protecting Information against Adversarial Attacks | 🛡Defense | 📝ICLR OpenReview | :octocat:Code | 2020 |
| A Novel Defending Scheme for Graph-Based Classification Against Graph Structure Manipulating Attack | 🛡Defense | 📝SocialSec |  | 2020 |
| Node Copying for Protection Against Graph Neural Network Topology Attacks | 🛡Defense | 📝arXiv |  | 2020 |
| Community detection in sparse time-evolving graphs with a dynamical Bethe-Hessian | 🛡Defense | 📝NeurIPS |  | 2020 |
| A Feature-Importance-Aware and Robust Aggregator for GCN | 🛡Defense | 📝CIKM | :octocat:Code | 2020 |
| Anti-perturbation of Online Social Networks by Graph Label Transition | 🛡Defense | 📝arXiv |  | 2020 |
| Graph Random Neural Networks for Semi-Supervised Learning on Graphs | 🛡Defense | 📝NeurIPS | :octocat:Code | 2020 |
| Attacking Graph-based Classification via Manipulating the Graph Structure | ⚔Attack | 📝CCS |  | 2019 |
| A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning | ⚔Attack | 📝NeurIPS | :octocat:Code | 2019 |
| Adversarial Examples on Graph Data: Deep Insights into Attack and Defense | ⚔Attack | 📝IJCAI | :octocat:Code | 2019 |
| Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective | ⚔Attack | 📝IJCAI | :octocat:Code | 2019 |
| Adversarial Attacks on Graph Neural Networks via Meta Learning | ⚔Attack | 📝ICLR | :octocat:Code | 2019 |
| Data Poisoning Attack against Knowledge Graph Embedding | ⚔Attack | 📝IJCAI |  | 2019 |
| GA Based Q-Attack on Community Detection | ⚔Attack | 📝TCSS |  | 2019 |
| Attacking Graph Convolutional Networks via Rewiring | ⚔Attack | 📝arXiv |  | 2019 |
| Unsupervised Euclidean Distance Attack on Network Embedding | ⚔Attack | 📝arXiv |  | 2019 |
| Structured Adversarial Attack Towards General Implementation and Better Interpretability | ⚔Attack | 📝ICLR | :octocat:Code | 2019 |
| Vertex Nomination, Consistent Estimation, and Adversarial Modification | ⚔Attack | 📝arXiv |  | 2019 |
| PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks | ⚔Attack | 📝ICLR | :octocat:Code | 2019 |
| Network Structural Vulnerability: A Multi-Objective Attacker Perspective | ⚔Attack | 📝IEEE Trans |  | 2019 |
| Multiscale Evolutionary Perturbation Attack on Community Detection | ⚔Attack | 📝arXiv |  | 2019 |
| αCyber: Enhancing Robustness of Android Malware Detection System against Adversarial Attacks on Heterogeneous Graph based Model | ⚔Attack | 📝CIKM |  | 2019 |
| Adversarial Attacks on Node Embeddings via Graph Poisoning | ⚔Attack | 📝ICML | :octocat:Code | 2019 |
| Generalizable Adversarial Attacks with Latent Variable Perturbation Modelling | ⚔Attack | 📝arXiv |  | 2019 |
| When Do GNNs Work: Understanding and Improving Neighborhood Aggregation | ⚖Stability | 📝IJCAI Workshop'2019 | :octocat:Code | 2019 |
| Stability Properties of Graph Neural Networks | ⚖Stability | 📝arXiv'2019 |  | 2019 |
| Stability and Generalization of Graph Convolutional Neural Networks | ⚖Stability | 📝KDD'2019 |  | 2019 |
| Adversarial Attacks and Defenses in Images, Graphs and Text: A Review | 📃Survey | 📝arXiv'2019 |  | 2019 |
| Certifiable Robustness and Robust Training for Graph Convolutional Networks | 🔐Certification | 📝KDD'2019 | :octocat:Code | 2019 |
| Certifiable Robustness to Graph Perturbations | 🔐Certification | 📝NeurIPS'2019 | :octocat:Code | 2019 |
| Perturbation Sensitivity of GNNs | 🚀Others | 📝cs224w'2019 |  | 2019 |
| Adversarial Training Methods for Network Embedding | 🛡Defense | 📝WWW | :octocat:Code | 2019 |
| Improving Robustness to Attacks Against Vertex Classification | 🛡Defense | 📝MLG@KDD |  | 2019 |
| Can Adversarial Network Attack be Defended? | 🛡Defense | 📝arXiv |  | 2019 |
| Adversarial Robustness of Similarity-Based Link Prediction | 🛡Defense | 📝ICDM |  | 2019 |
| GraphDefense: Towards Robust Graph Convolutional Networks | 🛡Defense | 📝arXiv |  | 2019 |
| GraphSAC: Detecting anomalies in large-scale graphs | 🛡Defense | 📝arXiv |  | 2019 |
| Edge Dithering for Robust Adaptive Graph Convolutional Networks | 🛡Defense | 📝arXiv |  | 2019 |
| Batch Virtual Adversarial Training for Graph Convolutional Networks | 🛡Defense | 📝ICML | :octocat:Code | 2019 |
| Bayesian graph convolutional neural networks for semi-supervised classification | 🛡Defense | 📝AAAI | :octocat:Code | 2019 |
| Target Defense Against Link-Prediction-Based Attacks via Evolutionary Perturbations | 🛡Defense | 📝arXiv |  | 2019 |
| Examining Adversarial Learning against Graph-based IoT Malware Detection Systems | 🛡Defense | 📝arXiv |  | 2019 |
| Adversarial Embedding: A robust and elusive Steganography and Watermarking technique | 🛡Defense | 📝arXiv |  | 2019 |
| Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning | 🛡Defense | 📝arXiv | :octocat:Code | 2019 |
| Adversarial Defense Framework for Graph Neural Network | 🛡Defense | 📝arXiv |  | 2019 |
| Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure | 🛡Defense | 📝TKDE | :octocat:Code | 2019 |
| Latent Adversarial Training of Graph Convolution Networks | 🛡Defense | 📝LRGSD@ICML | :octocat:Code | 2019 |
| Comparing and Detecting Adversarial Attacks for Graph Deep Learning | 🛡Defense | 📝RLGM@ICLR |  | 2019 |
| Characterizing Malicious Edges targeting on Graph Neural Networks | 🛡Defense | 📝ICLR OpenReview | :octocat:Code | 2019 |
| Robust Graph Data Learning via Latent Graph Convolutional Representation | 🛡Defense | 📝arXiv |  | 2019 |
| Investigating Robustness and Interpretability of Link Prediction via Adversarial Modifications | 🛡Defense | 📝NAACL | :octocat:Code | 2019 |
| Robust Graph Convolutional Networks Against Adversarial Attacks | 🛡Defense | 📝KDD | :octocat:Code | 2019 |
| Virtual Adversarial Training on Graph Convolutional Networks in Node Classification | 🛡Defense | 📝PRCV |  | 2019 |
| Adversarial Attack on Graph Structured Data | ⚔Attack | 📝ICML | :octocat:Code | 2018 |
| Attacking Similarity-Based Link Prediction in Social Networks | ⚔Attack | 📝AAMAS |  | 2018 |
| Hiding Individuals and Communities in a Social Network | ⚔Attack | 📝Nature Human Behaviour |  | 2018 |
| Adversarial Attacks on Neural Networks for Graph Data | ⚔Attack | 📝KDD | :octocat:Code | 2018 |
| Attack Tolerance of Link Prediction Algorithms: How to Hide Your Relations in a Social Network | ⚔Attack | 📝arXiv |  | 2018 |
| Fast Gradient Attack on Network Embedding | ⚔Attack | 📝arXiv |  | 2018 |
| Data Poisoning Attack against Unsupervised Node Embedding Methods | ⚔Attack | 📝arXiv |  | 2018 |
| Fake Node Attacks on Graph Convolutional Networks | ⚔Attack | 📝arXiv |  | 2018 |
| Deep Learning on Graphs: A Survey | 📃Survey | 📝arXiv'2018 |  | 2018 |
| Adversarial Attack and Defense on Graph Data: A Survey | 📃Survey | 📝arXiv'2018 |  | 2018 |
| Adversarial Personalized Ranking for Recommendation | 🛡Defense | 📝SIGIR | :octocat:Code | 2018 |
| Adversarial Sets for Regularising Neural Link Predictors | ⚔Attack | 📝UAI | :octocat:Code | 2017 |
| Practical Attacks Against Graph-based Clustering | ⚔Attack | 📝CCS |  | 2017 |