Instructor: Prof. CHEN, Qifeng
A large part of this course is modeled on CS231n offered at Stanford University.
- KNN
- SVM
- Softmax
- Two-Layer Neural Network
- Higher Level Representations (100/100)
- Fully-connected Neural Network
- Batch Normalization
- Dropout
- Convolutional Networks
- PyTorch / TensorFlow on CIFAR-10 (100/100)
- Image Captioning with Vanilla RNNs
- Image Captioning with LSTMs
- Network Visualization: Saliency maps, Class Visualization, Fooling Images
- Style Transfer
- Generative Adversarial Networks (100/100)
Image synthesis has been an active research area in computer vision. Many existing models can generate plausible, high-quality results across different generation tasks. However, these models are usually task-specific and computationally expensive, requiring large datasets and considerable effort to train. For many developers with limited resources, a simpler model that requires a smaller dataset and less compute is preferable. We constructed a simple network combining a GAN with a U-Net that is applicable to different generation tasks without imposing training overhead or modifying the original network structures. To validate its performance, we evaluated the network qualitatively on two image synthesis tasks: image colorization and image inpainting.
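The abstract does not specify the training objective, but GAN + U-Net pipelines for colorization and inpainting commonly pair an adversarial term with an L1 reconstruction term (as in pix2pix-style conditional GANs). The sketch below, a hypothetical illustration in plain NumPy, shows how such a combined generator loss could be computed; the function name, the `lam` weight, and the non-saturating BCE form are all assumptions, not details from this project.

```python
import numpy as np

def combined_generator_loss(d_fake, gen_out, target, lam=100.0):
    """Hypothetical pix2pix-style generator objective (assumed, not from the project).

    d_fake  : discriminator scores in (0, 1) for generated images
    gen_out : generator output image(s)
    target  : ground-truth image(s)
    lam     : weight on the L1 reconstruction term
    """
    # Adversarial term: non-saturating loss, generator wants D(fake) -> 1.
    adv = -np.mean(np.log(d_fake + 1e-8))
    # L1 term ties the output to the ground truth (colorized / inpainted image).
    l1 = np.mean(np.abs(gen_out - target))
    return adv + lam * l1
```

With a discriminator score of 0.5 and a perfect reconstruction, the loss reduces to the adversarial term alone, roughly `-log(0.5) ≈ 0.693`.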