Daniel Kang * 1 2 Yi Sun * 1 3 Tom Brown 1 Dan Hendrycks 4 Jacob Steinhardt 1
Abstract
We study the transfer of adversarial robustness of deep neural networks between different perturbation types. While most work on adversarial examples has focused on L∞ and L2-bounded perturbations, these do not capture all types of perturbations available to an adversary. The present work evaluates 32 attacks of 5 different types against models adversarially trained on a 100-class subset of ImageNet. Our empirical results suggest that evaluating on a wide range of perturbation sizes is necessary to understand whether adversarial robustness transfers between perturbation types. We further demonstrate that robustness against one perturbation type may not always imply, and may sometimes hurt, robustness against other perturbation types. In light of these results, we recommend that evaluation of adversarial defenses take place on a diverse range of perturbation types and sizes.
1. Introduction
Deep networks have shown remarkable accuracy on benchmark tasks (He et al., 2016), but can also be fooled by imperceptible changes to inputs, known as adversarial examples (Goodfellow et al., 2014). In response, researchers have studied the robustness of models, or how well models generalize in the presence of (potentially adversarial) bounded perturbations to inputs.
How can we tell if a model is robust? Evaluating model robustness is challenging because, while evaluating accuracy only requires a fixed distribution, evaluating robustness requires that the model perform well in the presence of many perturbations, some of which may be hard to anticipate and model. In the context of image classification, considerable work has focused on robustness to “L∞-bounded” perturbations (perturbations with bounded per-pixel magnitude) (Goodfellow et al., 2014; Madry et al., 2017; Xie et al., 2018). However, models hardened against L∞-bounded perturbations are still vulnerable to even small, perceptually minor departures from this family, such as small rotations and translations (Engstrom et al., 2017). Meanwhile, researchers continue to develop creative attacks that are difficult to even mathematically specify, such as fake eyeglasses, adversarial stickers, and 3D-printed objects (Sharif et al., 2018; Brown et al., 2017; Athalye et al., 2017).
The perspective of this paper is that any single, simple-to-define type of perturbation is likely insufficient to capture what a deployed model will be subject to in the real world. To address this, we investigate the robustness of models with respect to a broad range of perturbation types. We start with the following question:
When and how much does robustness to one type of perturbation transfer to other perturbations?
We study this question using adversarial training, a strong technique for adversarial defense applicable to any fixed attack (Goodfellow et al., 2014; Madry et al., 2017). We evaluate 32 attacks of 5 different types: L∞ (Goodfellow et al., 2014), L2 (Carlini & Wagner, 2017), L1 (Chen et al., 2018), elastic deformations (Xiao et al., 2018), and JPEG (Shin & Song, 2017). We run these attacks against adversarially trained ResNet-50 models on a 100-class subset of full-resolution ImageNet.
Our results provide empirical evidence that models robust under one perturbation type are not necessarily robust under other natural perturbation types. We show that:
- Evaluating on a carefully chosen range of perturbation sizes is important for measuring robustness transfer.
- Adversarial training against the elastic deformation attack demonstrates that adversarial robustness against one perturbation type can transfer poorly to, and at times hurt, robustness against other perturbation types.
- Adversarial training against the L2 attack may be better than training against the widely used L∞ attack.
While any given set of perturbation types may not encompass all potential perturbations that can occur in practice, our results demonstrate that robustness can fail to transfer even across a small but diverse set of perturbation types. Prior work in this area (Sharma & Chen, 2017; Jordan et al., 2019; Tramèr & Boneh, 2019) has studied transfer using single values of ε for each attack on lower-resolution datasets; we believe our larger-scale study provides a more comprehensive and interpretable view of transfer between these attacks. We therefore suggest considering performance against several different perturbation types and sizes as a first step for rigorous evaluation of adversarial defenses.
2. Adversarial attacks
We consider five types of adversarial attacks under the following framework. Let f : R^{3×224×224} → R^{100} be a model mapping images to logits, and let ℓ(f(x), y) denote the cross-entropy loss. For an input x with true label y and a target class y_t ≠ y, the attacks attempt to find x_t such that
- the attacked image x_t is a perturbation of x, constrained in a sense which differs for each attack, and
- the loss ℓ(f(x_t), y_t) is minimized (targeted attack).
We consider the targeted setting and the following attacks, described in more detail below:
- L∞ (Goodfellow et al., 2014)
- L2 (Szegedy et al., 2013; Carlini & Wagner, 2017)
- L1 (Chen et al., 2018)
- JPEG
- Elastic deformation (Xiao et al., 2018)
The L∞ and L2 attacks are standard in the adversarial examples literature (Athalye et al., 2018; Papernot et al., 2016; Madry et al., 2017; Carlini & Wagner, 2017), and we chose the remaining attacks for diversity in perturbation type. We now describe each attack, with sample images in Figure 1 and Appendix A. We clamp output pixel values to [0, 255].
For L_p attacks with p ∈ {1, 2, ∞}, the constraint allows an image x ∈ R^{3×224×224}, viewed as a vector of RGB pixel values, to be modified to an attacked image x_t = x + δ with
‖x_t − x‖_p ≤ ε,
where ‖·‖_p denotes the L_p-norm on R^{3×224×224}. For the L∞ and L2 attacks, we optimize using randomly-initialized projected gradient descent (PGD), which optimizes the perturbation δ by gradient descent and projection onto the L∞ and L2 balls (Madry et al., 2017). For the L1 attack, we use the randomly-initialized Frank-Wolfe algorithm (Frank & Wolfe, 1956), detailed in Appendix C. We believe that our Frank-Wolfe algorithm is more principled than the optimization used in existing L1 attacks such as EAD.
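To make the optimization concrete, the following is a minimal sketch of randomly-initialized targeted PGD under an L∞ constraint, written in PyTorch. The signed gradient step and the ε/√steps step size follow the description here and in Section 3.1, but the function name and remaining details are illustrative assumptions rather than our exact implementation.

```python
import torch
import torch.nn.functional as F

def pgd_linf_targeted(model, x, y_target, eps, steps):
    """Sketch of randomly-initialized targeted PGD under an L-infinity
    constraint, for pixel values in [0, 255]. Details are illustrative."""
    # Random start inside the L-infinity ball of radius eps.
    delta = (torch.rand_like(x) * 2 - 1) * eps
    delta.requires_grad_(True)
    step_size = eps / steps ** 0.5  # step size proportional to 1/sqrt(steps)

    for _ in range(steps):
        logits = model(torch.clamp(x + delta, 0, 255))
        # Targeted attack: minimize cross-entropy towards the target class.
        loss = F.cross_entropy(logits, y_target)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= step_size * grad.sign()  # descend on the targeted loss
            delta.clamp_(-eps, eps)           # project back onto the L-inf ball

    return torch.clamp(x + delta, 0, 255).detach()
```

For the L2 attack, the signed step and box clamp above would be replaced by a normalized gradient step and a projection onto the L2 ball of radius ε.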
As discussed by Shin & Song (2017), who used it as a defense, JPEG compression applies a lossy linear transformation based on the discrete cosine transform (denoted by JPEG) to image space, followed by quantization. The JPEG attack, which we believe is new to this work, imposes on the attacked image x_t an L∞-constraint in this transformed space:
‖JPEG(x) − JPEG(x_t)‖_∞ ≤ ε.
We optimize z = JPEG(x_t) with randomly initialized PGD and apply a right inverse of JPEG to obtain the attacked image.
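A schematic of this procedure is sketched below, assuming a function jpeg_encode implementing the transform denoted JPEG above and a differentiable jpeg_decode implementing a right inverse of it; both names are placeholders rather than functions from any particular library, and the quantization details are omitted.

```python
import torch
import torch.nn.functional as F

def jpeg_attack(model, x, y_target, eps, steps, jpeg_encode, jpeg_decode):
    """Sketch of the JPEG attack: PGD on z = JPEG(x_t) subject to
    ||JPEG(x) - z||_inf <= eps, then decode z back to image space."""
    z_clean = jpeg_encode(x)
    # Random start inside the L-infinity ball around JPEG(x).
    z = z_clean + (torch.rand_like(z_clean) * 2 - 1) * eps
    z.requires_grad_(True)
    step_size = eps / steps ** 0.5

    for _ in range(steps):
        x_adv = torch.clamp(jpeg_decode(z), 0, 255)
        loss = F.cross_entropy(model(x_adv), y_target)
        grad, = torch.autograd.grad(loss, z)
        with torch.no_grad():
            z -= step_size * grad.sign()
            # Project back onto the constraint set coordinate-wise.
            z.copy_(torch.min(torch.max(z, z_clean - eps), z_clean + eps))

    return torch.clamp(jpeg_decode(z), 0, 255).detach()
```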
The elastic deformation attack allows perturbations
x_t = Flow(x, V),
where V : {1, . . . , 224}^2 → R^2 is a vector field on pixel space, and Flow sets the value of pixel (i, j) to the (bilinearly interpolated) value at (i, j) + V(i, j). We constrain V to be the convolution of a vector field W with a 25 × 25 Gaussian kernel with standard deviation 3, and enforce that
‖W(i, j)‖_∞ ≤ ε for i, j ∈ {1, . . . , 224}.
We optimize the value of W with randomly initialized PGD. Note that our attack differs in details from Xiao et al. (2018), but is similar in spirit.
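For concreteness, the smoothing and resampling steps can be written with torch.nn.functional.grid_sample as in the sketch below. The tensor shapes, the kernel argument (a (2, 1, 25, 25) tensor holding the same normalized Gaussian with standard deviation 3 in each of the two groups), and the coordinate conventions are illustrative assumptions rather than a faithful reproduction of our implementation.

```python
import torch
import torch.nn.functional as F

def elastic_flow(x, W, kernel):
    """Sketch of the elastic deformation: smooth the raw field W with a
    Gaussian kernel, then bilinearly resample x at the displaced pixels.
    x: (N, 3, H, W) image batch; W: (N, 2, H, W) raw displacement field with
    entries bounded by eps; kernel: (2, 1, 25, 25) Gaussian smoothing kernel."""
    n, _, h, w = x.shape
    # V is the smoothed vector field; groups=2 smooths each channel separately.
    V = F.conv2d(W, kernel, padding=kernel.shape[-1] // 2, groups=2)

    # Base sampling grid in the normalized [-1, 1] coordinates grid_sample expects.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=x.device),
        torch.linspace(-1, 1, w, device=x.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)

    # Convert pixel-space displacements to normalized coordinates and resample.
    flow = torch.stack((V[:, 0] * 2 / (w - 1), V[:, 1] * 2 / (h - 1)), dim=-1)
    return F.grid_sample(x, base + flow, mode="bilinear", align_corners=True)
```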
3. Experiments
We measure transfer of adversarial robustness by evaluating our attacks against adversarially trained models. For each attack, we adversarially train models against the attack for a range of perturbation sizes ε. We then evaluate each adversarially trained model against each attack, giving the 2-dimensional accuracy grid of attacks evaluated against adversarially trained models shown in Figure 2 (analyzed in detail in Section 3.2).
3.1. Experimental setup
Dataset and model. We use the 100-class subset of ImageNet-1K (Deng et al., 2009) containing classes whose WordNet ID is a multiple of 10. We use the ResNet-50 (He et al., 2016) architecture with standard 224 × 224 resolution as implemented in torchvision. We believe this full resolution is necessary for the elastic and JPEG attacks.
Training hyperparameters. We trained on machines with 8 Nvidia V100 GPUs using standard data augmentation practices (He et al., 2016). Following best practices for multi-GPU training (Goyal et al., 2017), we used synchronized SGD for 90 epochs with a batch size of 32×8 and a learning rate schedule in which the learning rate is “warmed up” for 5 epochs and decayed by a factor of 10 at epochs 30, 60, and 80. Our initial learning rate after warm-up was 0.1, momentum was 0.9, and weight decay was 5 × 10^{−6}.
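As an illustration, this schedule corresponds to a rule like the following sketch; the linear shape of the warm-up is an assumption, while the milestones, base rate, and decay factor are those stated above.

```python
def lr_at_epoch(epoch, base_lr=0.1, warmup_epochs=5, milestones=(30, 60, 80)):
    """Learning rate at a given epoch: linear warm-up, then 10x decays."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    num_decays = sum(epoch >= m for m in milestones)
    return base_lr * (0.1 ** num_decays)
```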
Adversarial training. We harden models against attacks using adversarial training (Madry et al., 2017). To train against attack A, for each mini-batch of training images, we select target classes for each image uniformly at random from the 99 incorrect classes. We generate adversarial images by applying the targeted attack A to the current model with ε chosen uniformly at random between 0 and εmax. Finally, we update the model with a step of synchronized SGD using these adversarial images alone.
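A minimal sketch of one such training step is given below; the attack argument stands for any targeted attack (e.g. the PGD sketch above with its remaining arguments bound), num_classes=100 matches our ImageNet subset, and the helper names and random-target construction are illustrative assumptions rather than our exact code.

```python
import random
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, attack, images, labels,
                              eps_max, num_classes=100):
    """One adversarial training step: random incorrect targets, random eps in
    [0, eps_max], attack the current model, SGD step on attacked images only."""
    # A uniformly random target class that is guaranteed to differ from the label.
    offsets = torch.randint(1, num_classes, labels.shape, device=labels.device)
    targets = (labels + offsets) % num_classes

    eps = random.uniform(0.0, eps_max)
    adv_images = attack(model, images, targets, eps)

    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```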
We list attack parameters used for training in Table 1. For the PGD attack, we chose step size ε/√steps, motivated by the fact that taking step size proportional to 1/√steps is optimal for non-smooth convex functions (Nemirovski & Yudin, 1978; 1983). Note that the greater number of PGD steps for elastic deformation is due to the greater difficulty of its optimization problem, which we are not confident is fully solved even with this greater number of steps.
Attack hyperparameters. We evaluate our adversarially trained models on the (subsetted) ImageNet-1K validation set against targeted attacks with targets chosen uniformly at random from among the 99 incorrect classes. We list attack parameters for evaluation in Table 1. As suggested by Carlini et al. (2019), we use more steps for evaluation than for adversarial training to ensure PGD converges.
3.2. Results and analysis
Using the results of our adversarial training and evaluation experiments in Figure 2, we draw the following conclusions.
Choosing ε well is important. Because attack strength increases with the allowed perturbation magnitude ε, comparing robustness between different perturbation types requires a careful choice of ε for both attacks. First, we observe that a range of ε yielding comparable attack strengths should be used for all attacks to avoid drawing misleading conclusions. We suggest the following principles for choosing this range, which we followed for the parameters in Table 1:
- Models adversarially trained against the minimum value of ε should have validation accuracy comparable to that of a model trained on unattacked data.
- Attacks with the maximum value of ε should substantially reduce validation accuracy in adversarial training or perturb the images enough to confuse humans.
To illustrate this point, we provide in Appendix B a subset of Figure 2 with ε ranges that differ in strength between attacks; the (deliberately) biased ranges of ε chosen in this subset cause the L1 and elastic attacks to be perceived as stronger than our full results reveal.
Second, even if two attacks are evaluated on ranges of ε of comparable strength, the specific values of ε chosen within those ranges may be important. In our experiments, we scaled ε geometrically for all attacks, but attack strength may not scale with ε in the same way for different attacks. As a result, when interpreting our results, we only draw conclusions which are invariant to the precise scaling of attack strength with ε. We illustrate this type of analysis with the following two examples.
Robustness against elastic transfers poorly to the other attacks. In Figure 2, the accuracies of models adversarially trained against elastic are higher against elastic than against the other attacks, meaning that for these values of ε, robustness against elastic does not imply robustness against other attacks. On the other hand, training against elastic with ε ≥ 4 generally increases accuracy against elastic with ε ≥ 4, but decreases accuracy against all other attacks.
Together, these imply that the lack of transfer we observe in Figure 2 is not an artifact of the specific values of ε we chose, but rather a broader effect at the level of perturbation types. In addition, this example shows that increasing robustness to larger perturbation sizes of a given type can hurt robustness to other perturbation types. This effect is only visible by considering an appropriate range of ε and cannot be detected from a single value of ε alone.
L2 adversarial training is weakly better than L∞. Comparing the rows of Figure 2 corresponding to training against L2 with ε ∈ {300, 600, 1200, 2400, 4800} with the rows corresponding to training against L∞ with ε ∈ {1, 2, 4, 8, 16}, we see that training against L2 yields slightly lower accuracies against L∞ attacks and higher accuracies against all other attacks. Because this effect extends to all ε for which training against L∞ is helpful, it does not depend on the relation between L∞ attack strength and ε. In fact, against the stronger half of our attacks, training against L2 with ε = 4800 gives accuracy comparable to or better than training against L∞ with an adaptive choice of ε. This provides some evidence that L2 is more effective to train against than L∞.
4. Conclusion
This work presents an empirical study of when and how much robustness transfers between different adversarial perturbation types. Our results on adversarial training and evaluation of 32 different attacks on a 100-class subset of ImageNet-1K highlight the importance of considering a diverse range of perturbation sizes and types for assessing transfer between types, and we recommend this as a guideline for evaluating adversarial robustness.
Acknowledgements
D. K. was supported by NSF Grant DGE-1656518. Y. S. was supported by a Junior Fellow award from the Simons Foundation and NSF Grant DMS-1701654. D. K., Y. S., and J. S. were supported by a grant from the Open Philanthropy Project.
References
Athalye, A., Engstrom, L., Ilyas, A., and Kwok, K. Synthesizing robust adversarial examples. CoRR, abs/1707.07397, 2017. URL http://arxiv.org/abs/1707.07397.
Athalye, A., Carlini, N., and Wagner, D. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.
Brown, T. B., Mané, D., Roy, A., Abadi, M., and Gilmer, J. Adversarial patch. CoRR, abs/1712.09665, 2017. URL http://arxiv.org/abs/1712.09665.
Carlini, N. and Wagner, D. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. IEEE, 2017.
Carlini, N., Athalye, A., Papernot, N., Brendel, W., Rauber, J., Tsipras, D., Goodfellow, I. J., Madry, A., and Kurakin, A. On evaluating adversarial robustness. CoRR, abs/1902.06705, 2019. URL http://arxiv.org/abs/1902.06705.
Chen, P.-Y., Sharma, Y., Zhang, H., Yi, J., and Hsieh, C.-J. EAD: Elastic-net attacks to deep neural networks via adversarial examples. In Thirty-second AAAI conference on artificial intelligence, 2018.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. IEEE, 2009.
Engstrom, L., Tran, B., Tsipras, D., Schmidt, L., and Madry, A. A rotation and a translation suffice: Fooling CNNs with simple transformations. arXiv preprint arXiv:1712.02779, 2017.
Frank, M. and Wolfe, P. An algorithm for quadratic programming. Naval research logistics quarterly, 3(1-2):95–110, 1956.
Goodfellow, I. J., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
Goyal, P., Dollár, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., and He, K. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
He, K., Zhang, X., Ren, S., and Sun, J. Identity mappings in deep residual networks. In European conference on computer vision, pp. 630–645. Springer, 2016.
Jordan, M., Manoj, N., Goel, S., and Dimakis, A. G. Quantifying Perceptual Distortion of Adversarial Examples. arXiv e-prints, art. arXiv:1902.08265, Feb 2019.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
Nemirovski, A. and Yudin, D. On Cezari’s convergence of the steepest descent method for approximating saddle point of convex-concave functions. In Soviet Math. Dokl, volume 19, pp. 258–269, 1978.
Nemirovski, A. and Yudin, D. Problem Complexity and Method Efficiency in Optimization. Intersci. Ser. Discrete Math. Wiley, New York, 1983.
Papernot, N., McDaniel, P., Wu, X., Jha, S., and Swami, A. Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP), pp. 582–597. IEEE, 2016.
Sharif, M., Bhagavatula, S., Bauer, L., and Reiter, M. K. Adversarial generative nets: Neural network attacks on state-of-the-art face recognition. CoRR, abs/1801.00349, 2018. URL http://arxiv.org/abs/1801.00349.
Sharma, Y. and Chen, P.-Y. Attacking the Madry Defense Model with L1-based Adversarial Examples. arXiv e-prints, art. arXiv:1710.10733, Oct 2017.
Shin, R. and Song, D. JPEG-resistant adversarial images. In NIPS 2017 Workshop on Machine Learning and Computer Security, 2017.
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
Tramèr, F. and Boneh, D. Adversarial Training and Robustness for Multiple Perturbations. arXiv e-prints, art. arXiv:1904.13000, Apr 2019.
Xiao, C., Zhu, J.-Y., Li, B., He, W., Liu, M., and Song, D. Spatially transformed adversarial examples. arXiv preprint arXiv:1801.02612, 2018.
Xie, C., Wu, Y., van der Maaten, L., Yuille, A., and He, K. Feature denoising for improving adversarial robustness. arXiv preprint arXiv:1812.03411, 2018.
A. Sample attacked images
In this appendix, we give more comprehensive sample outputs for our adversarial attacks. Figures 3 and 4 show sample attacked images for attacks with relatively large and small ε in our range, respectively. Figure 5 shows examples of how attacked images can be influenced by different types of adversarial training for defense models. In all cases, the images were generated by running the specified attack against an adversarially trained model with parameters specified in Table 1 for both evaluation and adversarial training.
B. Evaluation on a truncated ε range
In this appendix, we show in Figure 6 a subset of Figure 2 with a truncated range of ε. In particular, we omitted small values of ε for L1, elastic, and JPEG and large values of ε for L∞ and L2. The resulting accuracy grid gives several misleading impressions, including:
- The L1 attack is stronger than L∞, L2, and JPEG.
- Training against the other attacks gives almost no robustness against the elastic attack.
The full range of results in Figure 2 shows that these two purported effects are artifacts of the incorrectly truncated range of ε used in Figure 6. In particular:
- The additional smaller ε columns for the L1 attack in Figure 2 demonstrate that its perceived strength in Figure 6 is an artifact of incorrectly omitting these values.
- The additional smaller ε columns for the elastic attack in Figure 2 reveal that training against the other attacks is effective in defending against weak versions of the elastic attack, contrary to the impression given by Figure 6.
C. L1 Attack
We chose to use the Frank-Wolfe algorithm for optimizing the L1 attack, as projected gradient descent would require projecting onto a truncated L1 ball, which is a complicated operation. In contrast, Frank-Wolfe only requires optimizing linear functions g^T x over a truncated L1 ball; this can be done by sorting coordinates by the magnitude of g and moving the top k coordinates to the boundary of their range (with k chosen by binary search). This is detailed in Algorithm 1.
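The following is a simplified greedy sketch of that linear-maximization step (maximize ⟨g, δ⟩ over ‖δ‖_1 ≤ ε intersected with the pixel box); it plays the same role as the binary-search procedure in Algorithm 1 but is not a line-for-line reproduction of it, and the function and argument names are placeholders.

```python
import torch

def l1_linear_maximizer(g, x, eps, pixel_min=0.0, pixel_max=255.0):
    """Maximize <g, delta> subject to ||delta||_1 <= eps and
    x + delta staying inside [pixel_min, pixel_max]."""
    g_flat, x_flat = g.flatten(), x.flatten()
    # Pushing coordinate i towards its boundary in the direction of sign(g_i)
    # gains |g_i| per unit of L1 budget; `room` is how far it can move.
    room = torch.where(g_flat >= 0, pixel_max - x_flat, x_flat - pixel_min)
    order = torch.argsort(g_flat.abs(), descending=True)

    delta = torch.zeros_like(x_flat)
    budget = float(eps)
    for i in order.tolist():  # in practice only a few coordinates are touched
        if budget <= 0:
            break
        move = min(room[i].item(), budget)  # last coordinate may move partially
        delta[i] = move if g_flat[i] >= 0 else -move
        budget -= move
    return delta.view_as(x)
```

For the targeted attack of Section 2, g would be the negative gradient of the targeted cross-entropy loss, since that loss is minimized.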