
Merge pull request #65 from invoke-ai/peft
Migrate to using HF PEFT
RyanJDick authored Jan 9, 2024
2 parents c82e0fb + 27f5be5 commit 6bd6ac3
Showing 32 changed files with 465 additions and 1,456 deletions.
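
At its core, this PR replaces the repo's hand-rolled LoRA layer injection (the deleted `src/invoke_training/core/lora/injection/*` modules listed below) with Hugging Face PEFT. A minimal sketch of the PEFT-based pattern (the rank and target-module names here are illustrative assumptions, not the trainer's exact settings):

```python
# Sketch of PEFT-based LoRA setup (assumed rank/target modules, not the
# trainer's exact config).
import peft
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

lora_config = peft.LoraConfig(
    r=8,  # LoRA rank (assumed value)
    lora_alpha=8,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # attention projections
)
peft_unet = peft.get_peft_model(unet, lora_config)

# Only the LoRA weights are trainable; the base weights stay frozen.
peft_unet.print_trainable_parameters()

# Checkpoints are saved in PEFT format via save_pretrained(), the format
# referenced in the updated quick-start docs.
peft_unet.save_pretrained("output/example_checkpoint")
```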
4 changes: 4 additions & 0 deletions README.md
@@ -2,6 +2,10 @@
 A library for training custom Stable Diffusion models (fine-tuning, LoRA training, textual inversion, etc.) that can be used in [InvokeAI](https://github.com/invoke-ai/InvokeAI).
 
+> [!WARNING]
+> `invoke-training` is still under active development, and breaking changes are likely. Full backwards compatibility will not be guaranteed until v1.0.0.
+> In the meantime, I recommend pinning to a specific commit hash.
+
 ## Documentation
 
 https://invoke-ai.github.io/invoke-training/
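
As the new README warning suggests, installs can be pinned to a specific commit until v1.0.0. One way to do this with pip (an illustrative command, using this merge commit's hash):

```bash
# Pin the install to a specific commit (6bd6ac3 is this merge commit).
pip install "invoke-training @ git+https://github.com/invoke-ai/invoke-training.git@6bd6ac3"
```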
5 changes: 3 additions & 2 deletions configs/finetune_lora_sd_pokemon_1x8gb_example.yaml
@@ -10,7 +10,7 @@
 type: FINETUNE_LORA_SD
 seed: 1
 output:
-  base_output_dir: output/
+  base_output_dir: output/finetune_lora_sd_pokemon/
 
 optimizer:
   learning_rate: 1.0
@@ -28,12 +28,13 @@ data_loader:
   dataset_name: lambdalabs/pokemon-blip-captions
   image_transforms:
     resolution: 512
+  dataloader_num_workers: 4
 
 # General
 model: runwayml/stable-diffusion-v1-5
 gradient_accumulation_steps: 1
 mixed_precision: fp16
-xformers: True
+xformers: False
 gradient_checkpointing: True
 # Dataset size is 833. Set max_train_steps to train for 3 epochs.
 # ceil(833 / 4) * 3
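
For context, the flags touched in this config map roughly onto the following accelerate/diffusers calls (a generic sketch of the pattern, not the trainer's actual wiring):

```python
# Rough mapping of the config flags above to library calls (sketch only).
from accelerate import Accelerator
from diffusers import UNet2DConditionModel

accelerator = Accelerator(
    mixed_precision="fp16",         # mixed_precision: fp16
    gradient_accumulation_steps=1,  # gradient_accumulation_steps: 1
)

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
unet.enable_gradient_checkpointing()  # gradient_checkpointing: True

# xformers is False in this config; if it were True, the usual call would be:
# unet.enable_xformers_memory_efficient_attention()
```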
4 changes: 2 additions & 2 deletions configs/finetune_lora_sdxl_pokemon_1x24gb_example.yaml
@@ -9,7 +9,7 @@
 type: FINETUNE_LORA_SDXL
 seed: 1
 output:
-  base_output_dir: output/
+  base_output_dir: output/finetune_lora_sdxl_pokemon/
 
 optimizer:
   learning_rate: 1.0
@@ -33,7 +33,7 @@ model: stabilityai/stable-diffusion-xl-base-1.0
 vae_model: madebyollin/sdxl-vae-fp16-fix
 gradient_accumulation_steps: 1
 mixed_precision: fp16
-xformers: True
+xformers: False
 gradient_checkpointing: True
 # Dataset size is 833. Set max_train_steps to train for 2 epochs.
 # ceil(833 / 6) * 2
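
The `vae_model: madebyollin/sdxl-vae-fp16-fix` setting swaps in a VAE that is numerically stable in fp16 (the stock SDXL VAE can produce NaNs at half precision). Loading it looks roughly like this (an illustrative sketch, not the trainer's code):

```python
# Load the fp16-safe SDXL VAE named by `vae_model` (illustrative usage).
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
)
```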
4 changes: 2 additions & 2 deletions configs/finetune_lora_sdxl_pokemon_1x8gb_example.yaml
@@ -10,7 +10,7 @@
 type: FINETUNE_LORA_SDXL
 seed: 1
 output:
-  base_output_dir: output/
+  base_output_dir: output/finetune_lora_sdxl_pokemon/
 
 optimizer:
   learning_rate: 1.0
@@ -37,7 +37,7 @@ cache_text_encoder_outputs: True
 enable_cpu_offload_during_validation: True
 gradient_accumulation_steps: 4
 mixed_precision: fp16
-xformers: True
+xformers: False
 gradient_checkpointing: True
 # Dataset size is 833. Set max_train_steps to train for 3 epochs.
 # ceil(833 / 4) * 3
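
With `gradient_accumulation_steps: 4`, four micro-batches are accumulated per optimizer step, which is how this config keeps SDXL training within the 8 GB budget in the filename; per the comment above, ceil(833 / 4) = 209 optimizer steps per epoch, so 209 * 3 = 627 steps covers 3 epochs. In accelerate, the accumulation pattern is typically written as follows (a generic sketch, not the trainer's loop; `model`, `optimizer`, `dataloader`, and `compute_loss` are assumed to be defined elsewhere):

```python
# Generic gradient-accumulation loop with accelerate (sketch only).
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for batch in dataloader:
    # Gradients sync and the optimizer steps only every 4th micro-batch.
    with accelerator.accumulate(model):
        loss = compute_loss(model, batch)  # assumed user-defined loss fn
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```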
14 changes: 12 additions & 2 deletions docs/get_started/quick_start.md
@@ -27,15 +27,25 @@ Monitor the training process with Tensorboard by running `tensorboard --logdir o
 ![Screenshot of the Tensorboard UI showing validation images.](../images/tensorboard_val_images_screenshot.png)
 *Validation images in the Tensorboard UI.*
 
-### 5. InvokeAI
+### 5. Select a checkpoint
+Select a checkpoint based on the quality of the generated images. In this short training run, there are only 3 checkpoints to choose from. As an example, we'll use the **Epoch 2** checkpoint.
+
+Internally, `invoke-training` stores the LoRA checkpoints in [PEFT format](https://huggingface.co/docs/peft/v0.7.1/en/package_reference/peft_model#peft.PeftModel.save_pretrained). We will convert the selected checkpoint to 'Kohya' format, because it has more widespread support across various UIs:
+```bash
+# Note: You will have to replace the timestamp in the checkpoint path.
+python src/invoke_training/scripts/convert_sd_lora_to_kohya_format.py \
+  --src-ckpt-dir output/finetune_lora_sd_pokemon/1691088769.5694647/checkpoint_epoch-00000002 \
+  --dst-ckpt-file output/finetune_lora_sd_pokemon/1691088769.5694647/checkpoint_epoch-00000002_kohya.safetensors
+```
+
+### 6. InvokeAI
 
 If you haven't already, set up [InvokeAI](https://github.com/invoke-ai/InvokeAI) by following its documentation.
 
 Copy your selected LoRA checkpoint into your `${INVOKEAI_ROOT}/autoimport/lora` directory. For example:
 ```bash
 # Note: You will have to replace the timestamp in the checkpoint path.
-cp output/1691088769.5694647/checkpoint_epoch-00000002.safetensors ${INVOKEAI_ROOT}/autoimport/lora/pokemon_epoch-00000002.safetensors
+cp output/finetune_lora_sd_pokemon/1691088769.5694647/checkpoint_epoch-00000002_kohya.safetensors ${INVOKEAI_ROOT}/autoimport/lora/pokemon_epoch-00000002.safetensors
 ```
 
 You can now use your trained Pokemon LoRA in the InvokeAI UI! 🎉
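
Outside of InvokeAI, the converted Kohya-format file can also be smoke-tested directly with diffusers (a sketch; the paths reuse the quick-start example above, and a CUDA GPU is assumed):

```python
# Load the converted Kohya-format LoRA in diffusers for a quick check
# (paths reuse the quick-start example; CUDA GPU assumed).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "output/finetune_lora_sd_pokemon/1691088769.5694647",
    weight_name="checkpoint_epoch-00000002_kohya.safetensors",
)
image = pipe("a cute blue pokemon with wings").images[0]
image.save("lora_smoke_test.png")
```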
7 changes: 4 additions & 3 deletions pyproject.toml
@@ -15,11 +15,12 @@ classifiers = [
     "Operating System :: OS Independent",
 ]
 dependencies = [
-    "accelerate~=0.21.0",
+    "accelerate~=0.25.0",
     "datasets~=2.14.3",
-    "diffusers~=0.24.0",
+    "diffusers~=0.25.0",
     "numpy",
     "omegaconf",
+    "peft~=0.7.0",
     "Pillow",
     "prodigyopt",
     "pydantic",
@@ -29,7 +30,7 @@ dependencies = [
     "torch>=2.1.2",
     "torchvision",
     "tqdm",
-    "transformers~=4.35.0",
+    "transformers~=4.36.0",
     "xformers>=0.0.23",
 ]
 
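A quick way to confirm an environment matches the new pins (a generic stdlib check, not part of the repo):

```python
# Print installed versions of the bumped dependencies.
from importlib.metadata import version

for pkg in ("accelerate", "diffusers", "peft", "transformers"):
    print(f"{pkg}=={version(pkg)}")
# Expected per pyproject: accelerate 0.25.x, diffusers 0.25.x,
# peft 0.7.x, transformers 4.36.x.
```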
43 changes: 0 additions & 43 deletions src/invoke_training/core/lora/injection/lora_layer_collection.py

This file was deleted.

108 changes: 0 additions & 108 deletions src/invoke_training/core/lora/injection/stable_diffusion.py

This file was deleted.

