CoreML stable diffusion image generation

The example app for running text-to-image or image-to-image models to generate images using Apple's Core ML Stable Diffusion implementation.
- Put at least one of your prepared `split_einsum` models into the local model folder. The example app supports only `split_einsum` models; in terms of performance, `split_einsum` is the fastest way to get results.
- Pick the model you placed in the local folder from the list. Click the update button if you added a model while the app was running.
- Enter a prompt or pick a picture and press "Generate" (you don't need to prepare the image size manually). It might take up to a minute or two to get the result; a minimal sketch of the underlying generation call follows below.
Example of a compatible model: `coreml-stable-diffusion-2-base`
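
Under the hood the app relies on Apple's ml-stable-diffusion Swift package. The snippet below is a minimal, illustrative sketch of such a generation call, not the app's actual code: the model path is hypothetical, and initializer parameters such as `controlNet:` vary between releases of the package.

```swift
import CoreML
import StableDiffusion // Apple's ml-stable-diffusion Swift package

// Hypothetical path to the local model folder holding a prepared split_einsum model
let modelFolder = URL(fileURLWithPath: "/path/to/local/models/coreml-stable-diffusion-2-base")

do {
    // split_einsum models are the variant that can be scheduled on the Neural Engine
    let mlConfig = MLModelConfiguration()
    mlConfig.computeUnits = .cpuAndNeuralEngine

    // Load the Core ML resources once and reuse the pipeline for later requests
    // (the `controlNet:` parameter exists only in newer releases of the package)
    let pipeline = try StableDiffusionPipeline(
        resourcesAt: modelFolder,
        controlNet: [],
        configuration: mlConfig,
        reduceMemory: true
    )
    try pipeline.loadResources()

    // Describe a single text-to-image request; for image-to-image,
    // set `startingImage` (a CGImage) and `strength` as well.
    var request = StableDiffusionPipeline.Configuration(prompt: "a watercolor lighthouse at dawn")
    request.stepCount = 25
    request.seed = UInt32.random(in: 0...UInt32.max)

    // Generation can take up to a minute or two, as noted above
    let images = try pipeline.generateImages(configuration: request)
    print("Generated \(images.compactMap { $0 }.count) image(s)")
} catch {
    print("Generation failed: \(error)")
}
```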
- You need to have Xcode 13 installed in order to have access to the Documentation Compiler (DocC)
- Go to Product > Build Documentation or press ⌃⇧⌘ D
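
DocC builds the documentation from the Markdown-style triple-slash comments in the source. Purely as an illustration (the function below is hypothetical and not part of this app's API), this is the kind of comment that Product > Build Documentation compiles:

```swift
import CoreGraphics

/// Generates an image for the given prompt with the currently selected model.
///
/// - Parameters:
///   - prompt: Text describing the desired image.
///   - modelName: Name of a `split_einsum` model placed in the local model folder.
/// - Returns: The generated image, or `nil` if nothing was produced.
func generateImage(prompt: String, modelName: String) throws -> CGImage? {
    // Hypothetical placeholder body; a real implementation would forward to the diffusion pipeline.
    return nil
}
```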