
CoreML stable diffusion image generation example

An example app for running text-to-image or image-to-image models to generate images with Apple's Core ML Stable Diffusion implementation.

A SwiftUI example for the package CoreML stable diffusion image generation.

(Image: the concept)

How to use

  1. Put at least one of your prepared split_einsum models into the local model folder (the example app supports only split_einsum models; in terms of performance, split_einsum is the fastest way to get a result)
  2. Pick the model you placed in the local folder from the list. Click the update button if you added a model while the app was running
  3. Enter a prompt or pick a picture and press "Generate" (you don't need to resize the image manually). It might take a minute or two to get the result; a sketch of the underlying generation call follows this list
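For context on what "Generate" triggers under the hood, here is a minimal sketch of a generation call using Apple's Core ML Stable Diffusion Swift package (StableDiffusionPipeline from apple/ml-stable-diffusion), which this example builds on. The resource path, prompt, and parameter values are hypothetical placeholders, and the exact API surface may vary between package versions:

```swift
import Foundation
import CoreGraphics
import CoreML
import StableDiffusion // Apple's ml-stable-diffusion Swift package

// Hypothetical location of a compiled split_einsum model set
// (e.g. coreml-stable-diffusion-2-base); adjust to your local model folder.
let resourcesURL = URL(fileURLWithPath: "/path/to/split_einsum/compiled")

// split_einsum models are partitioned for the Neural Engine,
// so let Core ML use the CPU and the ANE.
let mlConfig = MLModelConfiguration()
mlConfig.computeUnits = .cpuAndNeuralEngine

let pipeline = try StableDiffusionPipeline(
    resourcesAt: resourcesURL,
    configuration: mlConfig,
    reduceMemory: false
)
try pipeline.loadResources()

// Text-to-image: a prompt is enough.
var generation = StableDiffusionPipeline.Configuration(prompt: "a watercolor landscape")
generation.stepCount = 25
generation.seed = 42

// Image-to-image: also set a starting image and a strength in (0, 1].
// generation.startingImage = someCGImage
// generation.strength = 0.6

let images: [CGImage?] = try pipeline.generateImages(configuration: generation) { _ in
    true // return false here to cancel generation
}
```

The app wraps a call like this behind its SwiftUI interface; generation runs through many denoising steps, which is why a result can take a minute or two.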

(Image: the concept)

Model set example

coreml-stable-diffusion-2-base

Documentation (API)

  • You need Xcode 13 or later installed in order to have access to the documentation compiler (DocC)

  • Go to Product > Build Documentation or ⌃⇧⌘ D

    (Image: the concept)

Case study

Deploying Transformers on the Apple Neural Engine
