This repository contains the code and neural networks from the paper:
Empirical Analysis of Upper Bounds for Robustness Distributions using Adversarial Attacks
Authors: Aaron Berger, Nils Eberhardt, Annelot W. Bosman, Henning Duwe, Holger H. Hoos, and Jan N. van Rijn
Published in: [Conference/Journal Name, Year]
Citation key:
It contains:
- Pre-trained models in ONNX format (a minimal loading sketch follows this list).
- Experiment scripts and instructions for reproducing the results on MNIST.
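The models can be run with any ONNX-compatible runtime. Below is a minimal sketch using onnxruntime; the file name "mnist_model.onnx" and the (1, 1, 28, 28) input shape are illustrative assumptions, not names taken from this repository.

```python
# Minimal sketch: run inference on one of the ONNX models.
# Assumes a hypothetical file "mnist_model.onnx" with MNIST-shaped input.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("mnist_model.onnx")
input_name = session.get_inputs()[0].name

# Dummy MNIST-shaped input batch; replace with real test images.
x = np.random.rand(1, 1, 28, 28).astype(np.float32)
logits = session.run(None, {input_name: x})[0]
print(logits.argmax(axis=1))
```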
The experiments folder contains scripts for computing robustness distributions with the following algorithms:
- alpha-beta-CROWN
- AutoAttack
- FAB attack (Fast Adaptive Boundary)
- Fast Gradient Sign Method (FGSM; Goodfellow et al., 2015), with a minimal sketch after this list
- Projected Gradient Descent (PGD; Madry et al., 2018)
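To illustrate the attack-based upper bounds, here is a minimal FGSM sketch in PyTorch. It assumes a classifier `model`, an input batch `x` in [0, 1] (as for MNIST), labels `y`, and an illustrative `eps`; it is not the exact implementation used in the experiment scripts.

```python
# Minimal FGSM sketch (Goodfellow et al., 2015): perturb the input one
# step in the direction of the sign of the loss gradient.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the gradient sign, then clamp back to the valid input range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

PGD can be viewed as the iterated variant of this step, projecting back onto the epsilon-ball around `x` after each update.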
This project uses the following external packages:
- VERONA: an open-source package for creating robustness distributions.