[docs] refactoring docstrings in ./src/diffusers/models/transformers/auraflow_transformer_2d.py #9715
base: main
Conversation
…auraflow_transformer_2d.py

I reopened this PR since the previous one may have a sync issue.
Thanks, great work with improving these!
```diff
@@ -347,8 +347,9 @@ def __init__(
     def attn_processors(self) -> Dict[str, AttentionProcessor]:
         r"""
         Returns:
-            `dict` of attention processors: A dictionary containing all attention processors used in the model with
-            indexed by its weight name.
+            [`dict[`attention processors`]`]
```
This would cause tests to fail too, as it requires every implementation that is copied from `UNet2DConditionModel.attn_processors` to have the same docs and implementation. This can be changed, but you would need to make the modification in the mentioned file and then run `make fix-copies` (can do in a separate PR).
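For context, a minimal sketch of the `# Copied from` pattern this comment refers to; the class is trimmed to the relevant property and the body is elided, so this is an illustration, not the full diffusers implementation:

```python
from typing import Dict

from diffusers.models.attention_processor import AttentionProcessor


class AuraFlowTransformer2DModel:
    # (class trimmed to the relevant property for illustration)
    @property
    # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.attn_processors
    def attn_processors(self) -> Dict[str, AttentionProcessor]:
        r"""
        Returns:
            `dict` of attention processors: A dictionary containing all attention processors used in the model with
            indexed by its weight name.
        """
        ...  # body elided


# The repo consistency check compares every function carrying a
# "# Copied from" marker against its source. Editing only this copy fails
# the check: the docstring must change in UNet2DConditionModel first, after
# which `make fix-copies` rewrites every copy to match.
```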
```diff
@@ -405,8 +406,10 @@ def fn_recursive_attn_processor(name: str, module: torch.nn.Module, processor):
     # Copied from diffusers.models.unets.unet_2d_condition.UNet2DConditionModel.fuse_qkv_projections with FusedAttnProcessor2_0->FusedAuraFlowAttnProcessor2_0
     def fuse_qkv_projections(self):
         """
-        Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, key, value)
-        are fused. For cross-attention modules, key and value projection matrices are fused.
+        Enables fused QKV projections.
```
I think this will cause tests to break because it is `# Copied from` elsewhere. This would require the implementation and docs to be the same as every other occurrence.
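For reference, a minimal usage sketch of the method under discussion; the checkpoint name and dtype below are illustrative assumptions, not something this PR specifies:

```python
import torch
from diffusers import AuraFlowTransformer2DModel

# Illustrative checkpoint; any AuraFlow transformer weights would work.
transformer = AuraFlowTransformer2DModel.from_pretrained(
    "fal/AuraFlow", subfolder="transformer", torch_dtype=torch.float16
)

# Fuses the query, key, and value projection matrices of each
# self-attention module into a single projection (key and value only for
# cross-attention), reducing the number of separate matmuls per block.
transformer.fuse_qkv_projections()
```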
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Co-authored-by: Aryan <[email protected]>
What does this PR do?
Fixes #9567
(I reopened this PR with the same docs since my previous PR may have a sync issue. Sorry for the confusion!)
Before submitting
See the documentation guidelines and the tips on formatting docstrings.
Who can review?
This PR attempts a solution for one of the submodules listed in #9567, so I think @a-r-r-o-w is the best person to review it. @charchit7, @yijun-lee, and @SubhasmitaSw were also working on the same issue, so this is just a ping to keep them updated.