Day: 6 August 2019
A Discussion of ‘Adversarial Examples Are Not Bugs, They Are Features’: Learning from Incorrectly Labeled Data
Section 3.2 of Ilyas et al. (2019) shows that training a model on only adversarial errors leads to non-trivial generalization on the original test set. We...
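The experiment is easy to state in code: perturb each training image toward a randomly chosen wrong class, keep that wrong class as its label, train a fresh model on nothing else, and measure accuracy on the original, correctly labeled test set. Below is a minimal PyTorch sketch of one variant of this setup; the SmallNet architecture, L2 attack radius, step size, and training schedule are illustrative assumptions, not the configuration used by Ilyas et al. (2019).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
import torchvision
import torchvision.transforms as T

class SmallNet(nn.Module):
    """Placeholder classifier; any CIFAR-10 model works here."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4))
        self.fc = nn.Linear(64 * 16, 10)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def targeted_pgd_l2(model, x, y_target, eps=0.5, step=0.1, iters=20):
    """Perturb x toward class y_target within an L2 ball of radius eps."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y_target)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            g = grad / grad.flatten(1).norm(dim=1).clamp(min=1e-12).view(-1, 1, 1, 1)
            delta -= step * g                             # descend toward the target class
            n = delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
            delta.mul_((eps / n).clamp(max=1.0))          # project back into the L2 ball
    return (x + delta).clamp(0, 1).detach()

def run_experiment():
    train = torchvision.datasets.CIFAR10(".", train=True, download=True,
                                         transform=T.ToTensor())
    test = torchvision.datasets.CIFAR10(".", train=False, download=True,
                                        transform=T.ToTensor())
    source = SmallNet()
    # ... train `source` on clean data here (omitted for brevity) ...

    # Build the mislabeled training set: perturb each image toward a random
    # wrong class, then keep that WRONG class as its ground-truth label.
    xs, ys = [], []
    for x, y in DataLoader(train, batch_size=256):
        y_wrong = (y + torch.randint(1, 10, y.shape)) % 10
        xs.append(targeted_pgd_l2(source, x, y_wrong))
        ys.append(y_wrong)
    mislabeled = TensorDataset(torch.cat(xs), torch.cat(ys))

    # Train a FRESH model on the mislabeled data only.
    model = SmallNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(10):
        for x, y in DataLoader(mislabeled, batch_size=256, shuffle=True):
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()

    # Evaluate on the untouched, CORRECTLY labeled test set: accuracy well
    # above the 10% chance level is the "non-trivial generalization" at issue.
    with torch.no_grad():
        correct = sum((model(x).argmax(1) == y).sum().item()
                      for x, y in DataLoader(test, batch_size=256))
    print("clean test accuracy:", correct / len(test))

if __name__ == "__main__":
    run_experiment()
```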
A Discussion of ‘Adversarial Examples Are Not Bugs, They Are Features’: Adversarial Examples are Just Bugs, Too
Refining the source of adversarial examples
A Discussion of ‘Adversarial Examples Are Not Bugs, They Are Features’: Adversarially Robust Neural Style Transfer
An experiment showing that adversarial robustness makes neural style transfer work on a non-VGG architecture
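The measurement at the heart of that experiment is the standard Gram-matrix style loss, computed on the features of an adversarially robust non-VGG network rather than on VGG's. A minimal sketch of that loss, assuming a hypothetical list of activation maps extracted from such a robust backbone:

```python
import torch

def gram(feat):
    """Gram matrix of an activation map; feat has shape (B, C, H, W)."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(feats_image, feats_style):
    """Sum of squared Gram differences across layers. The claim under test
    is that with a robust backbone this behaves as well as it does on VGG."""
    return sum(((gram(a) - gram(b)) ** 2).sum()
               for a, b in zip(feats_image, feats_style))

# usage (hypothetical): feats_image / feats_style would be lists of
# intermediate activations of a robust ResNet on the content and style images
```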
A Discussion of ‘Adversarial Examples Are Not Bugs, They Are Features’: Two Examples of Useful, Non-Robust Features
Two constructed examples of features that are useful for classification yet non-robust
A Discussion of ‘Adversarial Examples Are Not Bugs, They Are Features’: Robust Feature Leakage
Investigating whether faint robust features leaking into the non-robust datasets could explain the results of Ilyas et al. (2019)
A Discussion of ‘Adversarial Examples Are Not Bugs, They Are Features’: Adversarial Example Researchers Need to Expand What is Meant by ‘Robustness’
The main hypothesis in Ilyas et al. (2019) happens to be a special case of a more general principle that is commonly accepted in the robustness...
A Discussion of ‘Adversarial Examples Are Not Bugs, They Are Features’
Six comments from the community and responses from the original authors