This repository contains the two Jupyter notebooks used during the May 31, 2019 GANocracy tutorial at MIT.
Please complete the setup instructions before running the notebooks.
by: David Bau, MIT
When GANs generate images, are they simply reproducing memorized pixel patterns, or are they composing images from learned objects? How do different architectures affect what the GAN learns? Which neurons are responsible for undesirable artifacts in generated images?
GANdissect [GitHub, paper] is an analytic framework for visualizing the internal representations of a GAN generator at the unit-, object-, and scene-level.
Image credit: Bau, David, et al. ["GAN Dissection: Visualizing and Understanding Generative Adversarial Networks."](https://arxiv.org/pdf/1811.10597.pdf) arXiv preprint arXiv:1811.10597 (2018).

GANdissect helps shed light on what representations GANs learn across layers, models, and datasets, and we can use that knowledge to compare, improve, and better control GAN performance.
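To give a flavor of the unit-level analysis, here is a minimal NumPy sketch of the kind of scoring GAN Dissection performs: a single unit's low-resolution activation map is upsampled, thresholded, and compared against a segmentation mask for one object class using intersection-over-union. The function name, threshold scheme, and nearest-neighbor upsampling are illustrative assumptions, not the actual GANdissect API.

```python
import numpy as np

def dissect_unit(activation, seg_mask, quantile=0.9):
    """Score how well one generator unit's activations align with an object mask.

    activation: (h, w) float map from a single convolutional unit
    seg_mask:   (H, W) boolean mask for one object class in the generated image
    Returns the IoU between the thresholded, upsampled activation and the mask.
    (Hypothetical helper for illustration; GANdissect's own code differs.)
    """
    H, W = seg_mask.shape
    h, w = activation.shape
    # Nearest-neighbor upsample the low-res activation map to image resolution.
    up = np.repeat(np.repeat(activation, H // h, axis=0), W // w, axis=1)
    # Keep only the top fraction of activations as the unit's "firing region".
    unit_region = up > np.quantile(up, quantile)
    inter = np.logical_and(unit_region, seg_mask).sum()
    union = np.logical_or(unit_region, seg_mask).sum()
    return inter / union if union else 0.0
```

A unit with a high IoU for, say, the "tree" class fires precisely where trees appear in the output, which is the kind of evidence used to argue that the GAN has learned an object-level representation rather than memorized pixels.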
by: Alex Andonian, MIT
How do you actually build and train a Generative Adversarial Network? What are best practices, tips, and tricks to help simplify the process?
This notebook offers a step-by-step walk-through in PyTorch of Deep Convolutional GAN (DCGAN) training, from data preparation and ingestion through results analysis.
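As a rough preview of what the notebook covers, the sketch below sets up a standard DCGAN generator/discriminator pair for 64x64 RGB images and one adversarial training step in PyTorch. The architecture sizes, hyperparameter names (`NZ`, `NGF`, `NDF`), and the `train_step` helper are illustrative assumptions, not the notebook's exact code.

```python
import torch
import torch.nn as nn

NZ, NGF, NDF = 100, 64, 64  # latent size and feature-map widths (assumed values)

# Generator: transposed convolutions upsample a latent vector to a 64x64 RGB image.
generator = nn.Sequential(
    nn.ConvTranspose2d(NZ, NGF * 8, 4, 1, 0, bias=False), nn.BatchNorm2d(NGF * 8), nn.ReLU(True),      # 4x4
    nn.ConvTranspose2d(NGF * 8, NGF * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(NGF * 4), nn.ReLU(True),  # 8x8
    nn.ConvTranspose2d(NGF * 4, NGF * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(NGF * 2), nn.ReLU(True),  # 16x16
    nn.ConvTranspose2d(NGF * 2, NGF, 4, 2, 1, bias=False), nn.BatchNorm2d(NGF), nn.ReLU(True),          # 32x32
    nn.ConvTranspose2d(NGF, 3, 4, 2, 1, bias=False), nn.Tanh(),                                         # 64x64
)

# Discriminator: strided convolutions downsample the image to a single real/fake score.
discriminator = nn.Sequential(
    nn.Conv2d(3, NDF, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(NDF, NDF * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(NDF * 2), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(NDF * 2, NDF * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(NDF * 4), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(NDF * 4, NDF * 8, 4, 2, 1, bias=False), nn.BatchNorm2d(NDF * 8), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(NDF * 8, 1, 4, 1, 0, bias=False), nn.Sigmoid(),
)

def train_step(real_images, opt_d, opt_g, criterion=nn.BCELoss()):
    """One adversarial update: discriminator first, then generator."""
    b = real_images.size(0)
    fake = generator(torch.randn(b, NZ, 1, 1))

    # Discriminator: real images labeled 1, generated images labeled 0.
    opt_d.zero_grad()
    loss_d = criterion(discriminator(real_images).view(-1), torch.ones(b)) \
           + criterion(discriminator(fake.detach()).view(-1), torch.zeros(b))
    loss_d.backward()
    opt_d.step()

    # Generator: push the discriminator toward outputting 1 on fakes.
    opt_g.zero_grad()
    loss_g = criterion(discriminator(fake).view(-1), torch.ones(b))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

A common design choice reflected here (and discussed as a best practice in the DCGAN literature) is to call `fake.detach()` in the discriminator update so gradients from the discriminator loss do not flow back into the generator.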
Image credit: Alex Andonian