Authors: Max Zimmer, Christoph Spiegel, Sebastian Pokutta
This repository contains the code to reproduce the experiments from the paper "Compression-aware Training of Neural Networks using Frank-Wolfe" (arXiv:2205.11921). The code is based on PyTorch 1.9 and the experiment-tracking platform Weights & Biases.
Experiments are started from the following file:
main.py
: Starts experiments using the dictionary format of Weights & Biases.
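As a rough illustration, such a configuration is an ordinary Python dictionary of hyperparameters that Weights & Biases passes to the run; the keys and values below are hypothetical, and the actual ones are defined in main.py:

```python
# Hypothetical sketch of a Weights & Biases-style config dictionary.
# All keys and values here are illustrative; main.py defines the real ones.
config = {
    "dataset": "CIFAR10",
    "model": "ResNet18",
    "optimizer": "SFW",
    "learning_rate": 0.1,
    "n_epochs": 100,
}
```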
The rest of the project is structured as follows:
strategies
: Contains all used sparsification methods.
runners
: Contains classes to control the training and the collection of metrics.
metrics
: Contains all metrics as well as FLOP computation methods.
models
: Contains all model architectures used.
optimizers
: Contains reimplementations of SFW, SGD, and Proximal SGD.
In case you find the paper or the implementation useful for your own research, please consider citing:
@Article{zimmer2022,
author = {Max Zimmer and Christoph Spiegel and Sebastian Pokutta},
title = {Compression-aware Training of Neural Networks using Frank-Wolfe},
year = {2022},
archiveprefix = {arXiv},
eprint = {2205.11921},
primaryclass = {cs.LG},
}