Stable Diffusion is a cutting-edge image generation technique, and it can be further enhanced by combining it with ControlNet, a widely used approach for adding conditional control. The combination allows Stable Diffusion to use a condition input to guide the image generation process, resulting in highly accurate and visually appealing images. The condition input can take many forms, such as scribbles, edge maps, pose key points, depth maps, segmentation maps, normal maps, or any other information that helps guide the content of the generated image, for example, QR codes! This method is particularly useful in complex image generation scenarios where precise control and fine-tuning are required to achieve the desired results.
In this tutorial, we will learn how to convert and run Controlnet QR Code Monster For SD-1.5 by monster-labs. An additional part demonstrates how to run quantization with NNCF to speed up the pipeline.
This notebook demonstrates how to convert, run and optimize ControlNet and Stable Diffusion using OpenVINO and NNCF.
The notebook contains the following steps:
- Create pipeline with PyTorch models using Diffusers library.
- Convert PyTorch models to OpenVINO IR format using model conversion API.
- Optimize `OVContrlNetStableDiffusionPipeline` with NNCF quantization.
- Compare results of the original and optimized pipelines.
- Run Stable Diffusion ControlNet pipeline with OpenVINO.
This is a self-contained example that relies solely on its own code.
We recommend running the notebook in a virtual environment. You only need a Jupyter server to start.
For details, please refer to the Installation Guide.