Commit
Improve docs for mixed_precision setting.
RyanJDick committed Apr 25, 2024
1 parent 11ba77c commit d0abb4e
Showing 6 changed files with 90 additions and 18 deletions.
17 changes: 14 additions & 3 deletions src/invoke_training/pipelines/_experimental/sd_dpo_lora/config.py
@@ -151,9 +151,20 @@ class SdDirectPreferenceOptimizationLoraConfig(BasePipelineConfig):
     """

     mixed_precision: Literal["no", "fp16", "bf16", "fp8"] = "no"
-    """The mixed precision mode to use ('no','fp16','bf16 or 'fp8'). This value is passed to Hugging Face Accelerate.
-    See accelerate.Accelerator for more details.
-    """
+    """The mixed precision mode to use.
+
+    If mixed precision is enabled, then all non-trainable parameters will be cast to the specified precision. The
+    trainable parameters are always kept in float32 precision to avoid issues with numerical stability.
+
+    Recommendations:
+    - `"no"`: Use this mode if you have plenty of VRAM available.
+    - `"bf16"`: Use this mode if you have limited VRAM and a GPU that supports bfloat16.
+    - `"fp16"`: Use this mode if you have limited VRAM and a GPU that does not support bfloat16.
+    - `"fp8"`: You are likely to run into numerical stability issues with this mode. Only use this mode if you know what you are doing and are willing to work through some issues.
+
+    This value is passed to Hugging Face Accelerate. See `accelerate.Accelerator` for more details.
+    """ # noqa: E501

     xformers: bool = False
     """If true, use xformers for more efficient attention blocks.
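The "Recommendations" list added by this commit encodes a simple decision rule. As a minimal sketch, that rule could be expressed as a helper like the one below; `recommend_mixed_precision` and its parameters are hypothetical names for illustration, not part of the invoke_training codebase.

```python
def recommend_mixed_precision(plenty_of_vram: bool, supports_bf16: bool) -> str:
    """Hypothetical helper mirroring the docstring's recommendations.

    Returns the suggested value for the `mixed_precision` config field.
    """
    if plenty_of_vram:
        # With ample VRAM, full float32 avoids any mixed-precision pitfalls.
        return "no"
    # With limited VRAM, prefer bfloat16 when the GPU supports it, since it
    # is more numerically robust than float16; otherwise fall back to fp16.
    # ("fp8" is deliberately never recommended by default.)
    return "bf16" if supports_bf16 else "fp16"
```

For example, an RTX 30-series card with limited memory would get `"bf16"`, while an older GPU without bfloat16 support would get `"fp16"`.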
19 changes: 16 additions & 3 deletions src/invoke_training/pipelines/stable_diffusion/lora/config.py
@@ -111,9 +111,22 @@ class SdLoraConfig(BasePipelineConfig):
     """

     mixed_precision: Literal["no", "fp16", "bf16", "fp8"] = "no"
-    """The mixed precision mode to use ('no','fp16','bf16 or 'fp8'). This value is passed to Hugging Face Accelerate.
-    See accelerate.Accelerator for more details.
-    """
+    """The mixed precision mode to use.
+
+    If mixed precision is enabled, then all non-trainable parameters will be cast to the specified precision. The
+    trainable parameters are always kept in float32 precision to avoid issues with numerical stability.
+
+    Recommendations:
+    - `"no"`: Use this mode if you have plenty of VRAM available.
+    - `"bf16"`: Use this mode if you have limited VRAM and a GPU that supports bfloat16.
+    - `"fp16"`: Use this mode if you have limited VRAM and a GPU that does not support bfloat16.
+    - `"fp8"`: You are likely to run into numerical stability issues with this mode. Only use this mode if you know what you are doing and are willing to work through some issues.
+
+    This value is passed to Hugging Face Accelerate. See
+    [`accelerate.Accelerator.mixed_precision`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.mixed_precision)
+    for more details.
+    """ # noqa: E501

     xformers: bool = False
     """If true, use xformers for more efficient attention blocks.
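The new docstring's core rule is: non-trainable parameters are cast to the mixed-precision dtype, while trainable parameters always stay in float32. A minimal plain-Python sketch of that casting rule follows; the `Param` dataclass and `apply_mixed_precision` function are hypothetical stand-ins for real model parameters, used only to illustrate the behavior described above.

```python
from dataclasses import dataclass


@dataclass
class Param:
    """Hypothetical stand-in for a model parameter (not a real tensor)."""
    name: str
    dtype: str
    requires_grad: bool  # True for trainable (e.g. LoRA) parameters.


def apply_mixed_precision(params: list[Param], mixed_precision: str) -> list[Param]:
    """Cast non-trainable params to the mixed-precision dtype.

    Trainable params are always kept in float32 for numerical stability,
    matching the behavior described in the config docstring.
    """
    weight_dtype = {
        "no": "float32",
        "fp16": "float16",
        "bf16": "bfloat16",
        "fp8": "float8",
    }[mixed_precision]
    for p in params:
        p.dtype = "float32" if p.requires_grad else weight_dtype
    return params
```

With `mixed_precision="bf16"`, a frozen UNet weight would end up in bfloat16 while a trainable LoRA weight remains float32.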
@@ -117,11 +117,22 @@ class SdTextualInversionConfig(BasePipelineConfig):
     """

     mixed_precision: Literal["no", "fp16", "bf16", "fp8"] = "no"
-    """The mixed precision mode to use. This value is passed to Hugging Face Accelerate.
-    See
+    """The mixed precision mode to use.
+
+    If mixed precision is enabled, then all non-trainable parameters will be cast to the specified precision. The
+    trainable parameters are always kept in float32 precision to avoid issues with numerical stability.
+
+    Recommendations:
+    - `"no"`: Use this mode if you have plenty of VRAM available.
+    - `"bf16"`: Use this mode if you have limited VRAM and a GPU that supports bfloat16.
+    - `"fp16"`: Use this mode if you have limited VRAM and a GPU that does not support bfloat16.
+    - `"fp8"`: You are likely to run into numerical stability issues with this mode. Only use this mode if you know what you are doing and are willing to work through some issues.
+
+    This value is passed to Hugging Face Accelerate. See
     [`accelerate.Accelerator.mixed_precision`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.mixed_precision)
     for more details.
-    """
+    """ # noqa: E501

     xformers: bool = False
     """If `True`, use xformers for more efficient attention blocks.
19 changes: 16 additions & 3 deletions src/invoke_training/pipelines/stable_diffusion_xl/lora/config.py
@@ -111,9 +111,22 @@ class SdxlLoraConfig(BasePipelineConfig):
     """

     mixed_precision: Literal["no", "fp16", "bf16", "fp8"] = "no"
-    """The mixed precision mode to use ('no','fp16','bf16 or 'fp8'). This value is passed to Hugging Face Accelerate.
-    See accelerate.Accelerator for more details.
-    """
+    """The mixed precision mode to use.
+
+    If mixed precision is enabled, then all non-trainable parameters will be cast to the specified precision. The
+    trainable parameters are always kept in float32 precision to avoid issues with numerical stability.
+
+    Recommendations:
+    - `"no"`: Use this mode if you have plenty of VRAM available.
+    - `"bf16"`: Use this mode if you have limited VRAM and a GPU that supports bfloat16.
+    - `"fp16"`: Use this mode if you have limited VRAM and a GPU that does not support bfloat16.
+    - `"fp8"`: You are likely to run into numerical stability issues with this mode. Only use this mode if you know what you are doing and are willing to work through some issues.
+
+    This value is passed to Hugging Face Accelerate. See
+    [`accelerate.Accelerator.mixed_precision`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.mixed_precision)
+    for more details.
+    """ # noqa: E501

     xformers: bool = False
     """If true, use xformers for more efficient attention blocks.
@@ -145,9 +145,22 @@ class SdxlLoraAndTextualInversionConfig(BasePipelineConfig):
     """

     mixed_precision: Literal["no", "fp16", "bf16", "fp8"] = "no"
-    """The mixed precision mode to use ('no','fp16','bf16 or 'fp8'). This value is passed to Hugging Face Accelerate.
-    See accelerate.Accelerator for more details.
-    """
+    """The mixed precision mode to use.
+
+    If mixed precision is enabled, then all non-trainable parameters will be cast to the specified precision. The
+    trainable parameters are always kept in float32 precision to avoid issues with numerical stability.
+
+    Recommendations:
+    - `"no"`: Use this mode if you have plenty of VRAM available.
+    - `"bf16"`: Use this mode if you have limited VRAM and a GPU that supports bfloat16.
+    - `"fp16"`: Use this mode if you have limited VRAM and a GPU that does not support bfloat16.
+    - `"fp8"`: You are likely to run into numerical stability issues with this mode. Only use this mode if you know what you are doing and are willing to work through some issues.
+
+    This value is passed to Hugging Face Accelerate. See
+    [`accelerate.Accelerator.mixed_precision`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.mixed_precision)
+    for more details.
+    """ # noqa: E501

     xformers: bool = False
     """If true, use xformers for more efficient attention blocks.
@@ -117,11 +117,22 @@ class SdxlTextualInversionConfig(BasePipelineConfig):
     """

     mixed_precision: Literal["no", "fp16", "bf16", "fp8"] = "no"
-    """The mixed precision mode to use. This value is passed to Hugging Face Accelerate.
-    See
+    """The mixed precision mode to use.
+
+    If mixed precision is enabled, then all non-trainable parameters will be cast to the specified precision. The
+    trainable parameters are always kept in float32 precision to avoid issues with numerical stability.
+
+    Recommendations:
+    - `"no"`: Use this mode if you have plenty of VRAM available.
+    - `"bf16"`: Use this mode if you have limited VRAM and a GPU that supports bfloat16.
+    - `"fp16"`: Use this mode if you have limited VRAM and a GPU that does not support bfloat16.
+    - `"fp8"`: You are likely to run into numerical stability issues with this mode. Only use this mode if you know what you are doing and are willing to work through some issues.
+
+    This value is passed to Hugging Face Accelerate. See
    [`accelerate.Accelerator.mixed_precision`](https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.mixed_precision)
     for more details.
-    """
+    """ # noqa: E501

     xformers: bool = False
     """If `True`, use xformers for more efficient attention blocks.
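All six configs declare the field as `Literal["no", "fp16", "bf16", "fp8"]`, so invalid values are rejected at config-validation time. A minimal sketch of how that Literal can back a runtime check is shown below; `validate_mixed_precision` is a hypothetical helper for illustration, not a function in invoke_training (the real configs rely on Pydantic to enforce the Literal).

```python
from typing import Literal, get_args

# The same Literal type used by the mixed_precision fields above.
MixedPrecision = Literal["no", "fp16", "bf16", "fp8"]


def validate_mixed_precision(value: str) -> str:
    """Hypothetical validator mirroring the Literal constraint on the field."""
    allowed = get_args(MixedPrecision)  # ("no", "fp16", "bf16", "fp8")
    if value not in allowed:
        raise ValueError(f"mixed_precision must be one of {allowed}, got {value!r}")
    return value
```

Note that `"fp32"` is not an accepted spelling; full precision is requested with `"no"`, matching what Accelerate's `Accelerator(mixed_precision=...)` expects.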
