Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?

PDF: arXiv, OpenReview

Code for our ICLR 2022 paper, where we show that synthetic data from diffusion models can provide a tremendous boost to the performance of robust training. We also provide the synthetic data used in the paper for all five datasets, namely CIFAR-10, CIFAR-100, ImageNet, CelebA, and AFHQ. In addition, we provide synthetic data from seven different generative models for CIFAR-10, which we used to analyze the impact of different generative models in Section 3.2.

Despite being minimalistic, this codebase also offers multi-node and multi-GPU adversarial training support.
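For orientation, here is a minimal sketch of how multi-GPU training is typically wired up with PyTorch's DistributedDataParallel and launched via torchrun. The architecture, dataset path, and script name (train_ddp.py) are placeholder assumptions for illustration, not the exact interface of this codebase.

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler
from torchvision import datasets, models, transforms

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for every process
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # placeholder architecture; swap in the network you actually train
    model = models.resnet18(num_classes=10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    dataset = datasets.CIFAR10(root="./data", train=True, download=True,
                               transform=transforms.ToTensor())
    sampler = DistributedSampler(dataset)  # shards the data across processes
    loader = DataLoader(dataset, batch_size=128, sampler=sampler, num_workers=4)

    # ... adversarial training loop goes here (see the sketch further below) ...

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Launched, for example, with: torchrun --nproc_per_node=4 train_ddp.py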

Getting started

Let’s start by installing all dependencies.

  • pip install torch torchvision easydict
  • pip install git+https://github.com/RobustBench/robustbench
  • pip install git+https://github.com/fra31/auto-attack
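As a quick sanity check of the installation, the sketch below loads a pretrained CIFAR-10 model from the RobustBench model zoo and evaluates it with AutoAttack. The model name ("Standard"), number of test examples, and batch size are arbitrary illustrative choices, not values prescribed by this repository.

import torch
from robustbench.data import load_cifar10
from robustbench.utils import load_model
from autoattack import AutoAttack

# load a small CIFAR-10 test subset and a pretrained model from the RobustBench zoo
x_test, y_test = load_cifar10(n_examples=64)
model = load_model(model_name="Standard", dataset="cifar10", threat_model="Linf")
model = model.cuda().eval()

# run the standard AutoAttack suite at the usual L-inf budget of 8/255
adversary = AutoAttack(model, norm="Linf", eps=8 / 255, version="standard")
x_adv = adversary.run_standard_evaluation(x_test.cuda(), y_test.cuda(), bs=64)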

Training a robust classifier

We can perform adversarial training on four GPUs using the following command.

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m
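Conceptually, robust training alternates between generating adversarial examples with PGD and updating the model on them. The following is a minimal sketch of that recipe, assuming standard PGD adversarial training with the common CIFAR-10 L-inf defaults (eps = 8/255, step size 2/255, 10 steps); it is an illustration only, not the repository's actual training script.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # L-inf PGD with a random start inside the eps-ball around x
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # take a signed gradient ascent step, then project back into the eps-ball
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device="cuda"):
    # one epoch of PGD adversarial training (Madry et al. style)
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.eval()                 # craft attacks against the current weights
        x_adv = pgd_attack(model, x, y)
        model.train()
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()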
