
Targeted-Physical-Adversarial-Attacks-on-AD

Targeted Attack on Deep RL-based Autonomous Driving with Learned Visual Patterns

Paper accepted at ICRA 2022

Website

Overview

The end-to-end implementation of the paper follows the steps below:

  1. Obtaining a policy
  2. Collecting data from simulator
  3. Training dynamics model
  4. Optimizing the perturbation for physical adversarial attack
  5. Testing the physical adversarial attack
  6. Evaluating robustness of attack to object position

Please refer to INSTALL.md for setting up the environment for this repository.

1. Policy

We use a policy trained with an Actor-Critic algorithm; the pretrained agent is taken from this repo and slightly modified for our use.
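
For orientation, here is a minimal sketch of what an actor-critic policy over image observations can look like; the layer sizes, 3x64x64 input, and action dimension are illustrative assumptions, not the pretrained agent's actual architecture.

# Illustrative sketch only -- not the repository's actual network.
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Shared convolutional encoder with separate actor and critic heads."""
    def __init__(self, action_dim=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        feat = 64 * 6 * 6  # feature size for an assumed 3x64x64 observation
        self.actor = nn.Linear(feat, action_dim)  # action means/logits
        self.critic = nn.Linear(feat, 1)          # state-value estimate

    def forward(self, obs):
        h = self.encoder(obs)
        return self.actor(h), self.critic(h)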

2. Data Collection

We collect data by running the agent with the pretrained + noise policy, as explained in the paper. This is done with the commands below for all three driving scenarios; a sketch of the noise-augmented rollout follows the note after the commands.

Scenario - Straight

python data_collection/generation_script.py --same-track --rollouts 1 --rootdir datasets --policy pre --scenario straight
python data_collection/generation_script.py --same-track --rollouts 9999 --rootdir datasets --policy pre_noise --scenario straight

Scenario - Left Turn

python data_collection/generation_script.py --same-track --rollouts 1 --rootdir datasets --policy pre --scenario left_turn
python data_collection/generation_script.py --same-track --rollouts 9999 --rootdir datasets --policy pre_noise --scenario left_turn

Scenario - Right Turn

python data_collection/generation_script.py --same-track --rollouts 1 --rootdir datasets --policy pre --scenario right_turn
python data_collection/generation_script.py --same-track --rollouts 9999 --rootdir datasets --policy pre_noise --scenario right_turn

NOTE: The data collection script provides several options for generating additional datasets with different policy types if desired.
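
Conceptually, the pre_noise policy adds exploration noise to the pretrained policy's actions so the dataset covers a wider range of states. A minimal sketch of that idea, assuming the classic Gym API and Gaussian noise (the function name and noise scale are illustrative, not the script's actual code):

# Illustrative sketch of "pretrained + noise" data collection -- the actual
# logic lives in data_collection/generation_script.py.
import numpy as np

def collect_rollout(env, policy, noise_std=0.1, max_steps=1000):
    """Roll out the pretrained policy with Gaussian action noise, logging transitions."""
    obs = env.reset()
    observations, actions = [], []
    for _ in range(max_steps):
        action = policy(obs) + np.random.normal(0.0, noise_std, size=env.action_space.shape)
        action = np.clip(action, env.action_space.low, env.action_space.high)
        observations.append(obs)
        actions.append(action)
        obs, _, done, _ = env.step(action)
        if done:
            break
    return np.array(observations), np.array(actions)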

3. Dynamics model

The dynamics model is composed of two networks, a VAE and an MDRNN, trained in sequence; a sketch of how they fit together follows the note below.

VAE

python dynamics_model/trainvae.py --dataset scenario_straight

MDRNN

python dynamics_model/trainmdrnn.py --dataset scenario_straight

NOTE: Change the --dataset argument to scenario_left_turn or scenario_right_turn for the other two driving scenarios.
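
In the world-model recipe this pipeline follows, the VAE compresses each frame into a latent vector z, and the MDRNN (a mixture-density recurrent network) predicts a distribution over the next latent given the current latent and action. Below is a minimal sketch of the MDRNN side with illustrative dimensions and names; see dynamics_model/ for the actual implementations.

# Illustrative sketch of the MDRNN half of the dynamics model.
import torch
import torch.nn as nn

LATENT, ACTION, HIDDEN, GAUSSIANS = 32, 3, 256, 5  # assumed sizes

class MDRNN(nn.Module):
    """LSTM that outputs a Gaussian mixture over the next latent state."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(LATENT + ACTION, HIDDEN, batch_first=True)
        # Mixture weights, means, and log-stds for each Gaussian component.
        self.head = nn.Linear(HIDDEN, GAUSSIANS * (1 + 2 * LATENT))

    def forward(self, z, a, hidden=None):
        out, hidden = self.rnn(torch.cat([z, a], dim=-1), hidden)
        params = self.head(out)
        pi, mu, log_sigma = torch.split(
            params, [GAUSSIANS, GAUSSIANS * LATENT, GAUSSIANS * LATENT], dim=-1)
        return pi, mu, log_sigma, hidden

# Usage: z_t from the VAE encoder, a_t from the policy.
# pi, mu, log_sigma, h = MDRNN()(z_t, a_t)  # distribution over z_{t+1}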

4. Generate Adversarial Perturbations

python attacks/optimize.py --scenario straight

To optimize for the other scenarios, change the --scenario argument to left_turn or right_turn.

TIP: Use --help to see the available arguments and experiment with different time steps, perturbation strengths, etc.

By default, the perturbations are saved in the attacks/perturbations folder, organized by driving scenario. We also provide the optimized perturbations shown in the paper under the attacks/perturbations_ours folder so the next step can be tested easily. A sketch of the optimization idea follows.
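
At a high level, the attack composites the perturbation into the observations of a rollout unrolled through the learned dynamics model and takes gradient steps so the resulting trajectory approaches the attacker's target state. The sketch below is a simplified rendering of that idea in PyTorch; the loss, unrolling, and clamping details are illustrative assumptions, not the exact procedure in attacks/optimize.py.

# Illustrative sketch of perturbation optimization through learned dynamics.
import torch

def optimize_perturbation(dynamics, policy, init_state, target_state,
                          steps=100, horizon=20, eps=0.1, lr=1e-2):
    """Gradient-optimize a bounded perturbation that steers the rollout to a target."""
    perturb = torch.zeros_like(init_state, requires_grad=True)
    opt = torch.optim.Adam([perturb], lr=lr)
    for _ in range(steps):
        state, loss = init_state, 0.0
        for _ in range(horizon):
            obs = (state + perturb).clamp(0, 1)   # apply perturbation to observation
            action = policy(obs)
            state = dynamics(state, action)       # predicted next state
            loss = loss + ((state - target_state) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            perturb.clamp_(-eps, eps)             # keep perturbation strength bounded
    return perturb.detach()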

5. Test Physical Adversarial Attack

python attacks/test.py --scenario straight

If you want to use our perturbations, append the argument --perturbs-dir attacks/perturbations_ours to the command. The command above runs all the experiments shown in the paper. Add the optional argument --save to save the figures and videos in the results folder; we have already provided the results based on our perturbations there.
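
Conceptually, testing applies the saved perturbation to the scene the agent observes during a live rollout. The sketch below assumes the perturbation is stored as a NumPy image patch composited at a fixed location; the file path and offsets are hypothetical.

# Illustrative sketch of applying a saved perturbation at test time -- see
# attacks/test.py for the real experiment driver.
import numpy as np

def apply_patch(obs, patch, top=10, left=20):
    """Composite the perturbation patch onto the observation at a fixed location."""
    out = obs.astype(np.float32).copy()
    h, w = patch.shape[:2]
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = np.clip(region + patch, 0.0, 1.0)
    return out

# Hypothetical usage with a saved perturbation:
# patch = np.load("attacks/perturbations_ours/straight/perturb.npy")
# attacked_obs = apply_patch(obs, patch)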

6. Robustness experiment

python attacks/robustness.py --scenario straight

If you want to use our perturbations, append the argument --perturbs-dir attacks/perturbations_ours to the command. The command above runs the robustness experiment shown in the paper. Add the optional argument --save to save the robustness heatmap in the results folder; we have already provided the robustness result based on our perturbations there.
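
The robustness experiment varies the adversarial object's position and records whether the attack still succeeds at each location, which is then rendered as a heatmap. A toy sketch of that sweep follows; the grid and success check are hypothetical stand-ins for the actual logic in attacks/robustness.py.

# Illustrative sketch of the position-robustness sweep.
import numpy as np
import matplotlib.pyplot as plt

def run_attack_at(x, y):
    """Hypothetical stand-in: run one attacked rollout with the object at (x, y),
    returning 1.0 if the attack succeeds."""
    return float(np.hypot(x, y) < 2.0)  # placeholder success criterion

xs, ys = np.linspace(-2, 2, 9), np.linspace(0, 4, 9)  # hypothetical position grid
success = np.array([[run_attack_at(x, y) for x in xs] for y in ys])

plt.imshow(success, origin="lower", extent=[xs[0], xs[-1], ys[0], ys[-1]])
plt.xlabel("lateral offset")
plt.ylabel("longitudinal offset")
plt.colorbar(label="attack success")
plt.savefig("robustness_heatmap.png")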

Acknowledgement

We would like to thank the developers of the open-source code that provided the policy and dynamics model implementations used in this repository.

Citation

Please cite our paper if you use it in your research:

@article{buddareddygari2021targeted,
  title={Targeted Attack on Deep RL-based Autonomous Driving with Learned Visual Patterns},
  author={Prasanth Buddareddygari and Travis Zhang and Yezhou Yang and Yi Ren},
  year={2021},
  journal={arXiv preprint arXiv:2109.07723}
}
