Keyframe Interpolation with Stable Video Diffusion

Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation
Xiaojuan Wang, Boyang Zhou, Brian Curless, Ira Kemelmacher-Shlizerman, Aleksander Holynski, Steve Seitz
arXiv | Project Page

(Teaser: input frame 1 | generated video | input frame 2)

Quick Start

1. Setup repository and environment

git clone https://github.com/jeanne-wang/svd_keyframe_interpolation.git
cd svd_keyframe_interpolation

conda env create -f environment.yml
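
Once the environment is created, activate it before running anything else. The environment name below is an assumption; check the name: field in environment.yml for the actual value.

# Activate the conda environment (name assumed; see environment.yml)
conda activate svd_keyframe_interpolation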

2. Download checkpoint

Download the finetuned checkpoint and place it under checkpoints/. A minimal sketch of the expected layout is below; the downloaded file path is a placeholder, so keep whatever name the download actually uses.
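
# Create the checkpoint directory and move the downloaded weights into it
# (the source path below is a placeholder for wherever you saved the download)
mkdir -p checkpoints/
mv /path/to/downloaded_checkpoint checkpoints/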

3. Launch the inference script!

Example input keyframe pairs are in the examples/ folder, and the corresponding interpolated videos (1024x576, 25 frames) are written to the results/ folder.
To interpolate, run:

bash keyframe_interpolation.sh
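
To try your own keyframe pair, a minimal sketch is below. It assumes the script picks up image pairs placed under examples/; the directory and file names are assumptions, so check keyframe_interpolation.sh for the exact input paths and variables it expects.

# Hypothetical layout for a custom keyframe pair (names are assumptions;
# inspect keyframe_interpolation.sh for the paths it actually reads)
mkdir -p examples/my_scene
cp /path/to/first_keyframe.png examples/my_scene/frame1.png
cp /path/to/second_keyframe.png examples/my_scene/frame2.png
bash keyframe_interpolation.sh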

Light-weight finetuning

The synthetic training video dataset will be released soon.