Simple TensorFlow TPU implementation of "Large Scale GAN Training for High Fidelity Natural Image Synthesis" (BigGAN)
I (David Mack) have been modifying this network to allow for configuration of its self-attention, to facilitate experiments into the effectiveness of different self-attention architectures.
- TODO: Implement BigGAN-deep architecture (simpler class embedding, deeper resblock)
- TODO: Explore whether orthogonal initialization (the paper's method) should be used instead of random normal initialization (the current implementation); see the sketch after this list
- TODO: Implement exponential moving averaging of parameters/batch norm for sampling during prediction and evaluation
- TODO: Find the bug in the Inception score calculation and implement FID
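As a rough illustration of the initialization TODO above, here is a minimal TF 1.x sketch of switching a layer between orthogonal initialization (the paper's method) and random normal initialization (the current implementation). The `weight_initializer` helper, the `use_orthogonal` flag, and the stddev value are hypothetical and not taken from this codebase.

```python
import tensorflow as tf

# Hypothetical helper for switching between the paper's orthogonal
# initialization and a random normal initialization.
# The flag name and stddev are illustrative, not this repo's settings.
def weight_initializer(use_orthogonal=True):
    if use_orthogonal:
        return tf.orthogonal_initializer(gain=1.0)        # BigGAN paper's choice
    return tf.random_normal_initializer(stddev=0.02)      # current approach (stddev assumed)

# Example usage: a conv layer picking up the selected initializer
x = tf.placeholder(tf.float32, [None, 128, 128, 3])
y = tf.layers.conv2d(x, filters=64, kernel_size=3, padding='same',
                     kernel_initializer=weight_initializer(use_orthogonal=True))
```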
For ImageNet, use TensorFlow's build scripts to create TFRecord files at your chosen image size (e.g. 128x128), then train with `--tfr-format inception`.
You can also use the data build script from NVIDIA's Progressive Growing of GANs, then train with `--tfr-format progan`.
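As a sketch of what consuming the ImageNet-style records could look like, here is a minimal `tf.data` pipeline assuming the standard Inception feature keys (`image/encoded`, `image/class/label`). The shard filename, image size, and scaling are assumptions, not this repo's exact input pipeline.

```python
import tensorflow as tf

def parse_inception_example(serialized):
    # Feature keys follow the standard Inception/ImageNet build-script layout
    features = tf.parse_single_example(serialized, {
        'image/encoded': tf.FixedLenFeature([], tf.string),
        'image/class/label': tf.FixedLenFeature([], tf.int64),
    })
    image = tf.image.decode_jpeg(features['image/encoded'], channels=3)
    image = tf.image.resize_images(image, [128, 128])   # match your chosen image size
    image = image / 127.5 - 1.0                         # scale pixels to [-1, 1]
    return image, features['image/class/label']

dataset = (tf.data.TFRecordDataset(['gs://your-bucket/train-00000-of-01024'])  # placeholder shard
           .map(parse_inception_example)
           .shuffle(1024)
           .batch(64, drop_remainder=True))             # fixed batch sizes are needed on TPU
```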
You can train on a Google TPU by setting the name of your TPU as an env var and running one of the training scripts. For example,
./launch_train_tpu_sagan.sh --tpu-name node-1
You need to have your training data stored in a Google Cloud Storage bucket.
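For reference, here is a rough TF 1.x sketch of how a TPU name and a GCS bucket typically feed into a `TPUEstimator` configuration. The env var name, bucket path, iteration count, and batch size are placeholders rather than this repository's actual settings (the launch scripts may wire the TPU name differently, e.g. via `--tpu-name`).

```python
import os
import tensorflow as tf

# Placeholder env var and GCS path; adjust to match your setup.
tpu_name = os.environ.get('TPU_NAME', 'node-1')

resolver = tf.contrib.cluster_resolver.TPUClusterResolver(tpu=tpu_name)

run_config = tf.contrib.tpu.RunConfig(
    cluster=resolver,
    model_dir='gs://your-bucket/biggan-model',   # model dir must also live on GCS
    tpu_config=tf.contrib.tpu.TPUConfig(iterations_per_loop=100),
)

# The model_fn comes from the training script; shown commented out because
# this sketch does not define one.
# estimator = tf.contrib.tpu.TPUEstimator(
#     model_fn=model_fn, config=run_config,
#     train_batch_size=256, use_tpu=True)
```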
You're very welcome to! Submit a PR or contact the author(s).
Junho Kim, David Mack