
# Real Time Style Transfer

This is a Torch implementation of the paper *Perceptual Losses for Real-Time Style Transfer and Super-Resolution*.
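The core idea of the paper is to train a feed-forward transformation network against losses measured in the feature space of a fixed, pretrained loss network (VGG in the paper), rather than per-pixel losses. A minimal NumPy sketch of the two losses, assuming feature maps have already been extracted (function names and shapes here are illustrative, not this repo's API):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map, normalized by C*H*W as in the paper."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)

def perceptual_losses(output_feat, content_feat, style_feat):
    # Feature (content) loss: mean squared distance between feature maps
    # of the network output and the content image.
    feature_loss = np.mean((output_feat - content_feat) ** 2)
    # Style loss: squared Frobenius distance between Gram matrices
    # of the network output and the style image.
    style_loss = np.sum((gram_matrix(output_feat) - gram_matrix(style_feat)) ** 2)
    return feature_loss, style_loss
```

In training, these losses are computed at several layers of the loss network and summed with weights; only the transformation network's parameters are updated.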

## Results

A preliminary working example and its output have been uploaded; better results will follow.

Trained on images resized to 256x256.

Style image -- content image -- output (256x256) version

Output (512x512) version

## How to Run

### To train

```
th main.lua -style style_image.jpg -dir <path to training images>
```

### To stylize with a trained model

```
th stylize.lua -test test_image.jpg -model <path to trained model file>
```

## What has been implemented

- Only style transfer has been implemented so far.
- Code updated to reflect the paper's changes for removing border artifacts.
- Both the residual architecture and a non-residual (flattened) architecture are implemented.
- Average-pooling and max-pooling options.
- A video version is coming soon; check out this example in Chainer.
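The residual vs. flattened distinction above can be illustrated with a toy sketch (NumPy, with plain linear maps standing in for the transformation network's conv layers; everything here is hypothetical, not this repo's code):

```python
import numpy as np

def transform_block(x, w1, w2, residual=True):
    # Hypothetical stand-in for a conv -> ReLU -> conv block of the
    # transformation network; w1 and w2 play the role of conv weights.
    h = np.maximum(w1 @ x, 0.0)   # first "conv" + ReLU
    h = w2 @ h                    # second "conv"
    # The residual variant adds the input back, so the block only needs
    # to learn a correction to the identity; the flattened variant must
    # reproduce the whole signal itself.
    return x + h if residual else h
```

With all-zero weights, the residual block is the identity while the flattened block outputs zeros, which is why residual blocks tend to be easier to train for image-to-image tasks.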

## Requirements

A GPU is required; the code is not yet fully CPU compatible.

- Torch packages: image, xlua, cutorch, cunn, optim
- cuDNN/CUDA

## Details

- Trained on the MS COCO train set (~80,000 images) for two epochs on an NVIDIA Titan X GPU; training takes about 6 hours.
- The trained model file is available at Output/Styles/transformNet.t7.
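As a rough sanity check on the training time above (the batch size of 4 is an assumption borrowed from the paper, not read from this repo's code):

```python
# Back-of-the-envelope training-scale arithmetic.
images, epochs, batch_size = 80_000, 2, 4   # batch size 4 is an assumption
iterations = images * epochs // batch_size   # 40,000 parameter updates
seconds_per_iter = 6 * 3600 / iterations     # ~0.54 s per update on a Titan X
print(iterations, round(seconds_per_iter, 2))
```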

This implementation reuses some code from these excellent repos: