
[TorchFX] Constant folding #3047

Merged

Conversation

daniil-lyakhov (Collaborator) commented Oct 30, 2024

Changes

Constant folding is now enabled by default in the TorchFX backend.

Reason for changes

To align quantizer placement between the OpenVINO (OV) and TorchFX backends.

Related tickets

#2766

Tests

  • test_constant_folding
  • test_constant_folding_with_constraints
  • test_models.py references are updated
  • post_training_quantization/535/ - finished successfully

@github-actions bot added the NNCF PT (Pull requests that update NNCF PyTorch) and experimental labels Oct 30, 2024
@daniil-lyakhov marked this pull request as ready for review October 30, 2024 13:17
@daniil-lyakhov requested a review from a team as a code owner October 30, 2024 13:17
@alexsu52 requested a review from anzr299 November 1, 2024 04:55
nncf/experimental/torch/fx/constant_folding.py (outdated)
@@ -802,12 +803,26 @@ def apply_quantization_transformations(model: torch.fx.GraphModule) -> None:
# To make it easier for algorithms to work
# with the target graph, BatchNorm operations
# are fused
fold_constant_except_qdq(model)
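The `fold_constant_except_qdq` call above restricts folding so that quantize/dequantize (QDQ) pairs survive for the quantization algorithms to work with. A minimal sketch of such a constraint, assuming folding is gated by a per-node predicate and that QDQ ops are matched by name (both the predicate and the name-based match are simplifications, not NNCF's actual operator matching):

```python
import operator
import torch.fx


def is_foldable(node: torch.fx.Node, qdq_keywords=("quantize",)) -> bool:
    """Decide whether a node may be constant-folded. Quantize/dequantize
    nodes are excluded so QDQ pairs survive folding. The name-based match
    is a stand-in for NNCF's real operator matching."""
    if node.op != "call_function":
        return False  # placeholders, attributes, outputs are never folded here
    target_name = getattr(node.target, "__name__", str(node.target))
    return not any(key in target_name for key in qdq_keywords)


def dequantize_per_tensor(t):  # hypothetical stand-in for a real QDQ op
    return t


# Build a tiny graph by hand: x * 2.0 followed by a fake dequantize.
g = torch.fx.Graph()
x = g.placeholder("x")
mul = g.call_function(operator.mul, (x, 2.0))
dq = g.call_function(dequantize_per_tensor, (mul,))
g.output(dq)
```

With this predicate, `is_foldable(mul)` is `True` while `is_foldable(dq)` is `False` (the keyword `"quantize"` also matches `dequantize_per_tensor`), so a folder driven by it would leave the QDQ node in the graph.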
Collaborator (author) commented:

Can you provide a use case / model to justify this change?

@alexsu52 alexsu52 merged commit a3895e5 into openvinotoolkit:develop Nov 7, 2024
14 checks passed
Labels
experimental, NNCF PT (Pull requests that update NNCF PyTorch)
3 participants