
[codegen][gpu] Adding support to generic op and flexible layout to pad_to_intrinsics on convolution #20073

Merged: jerryyin merged 2 commits into main from users/zyin/support-generic-padtointrinsics on Feb 25, 2025

Conversation

jerryyin (Member)

The pad_to_intrinsics pass only supports the linalg.conv2d op with the nhwc_hwcf convolution layout. This has created inconvenience around taking advantage of other convolution variants for their performance potential. One such scenario is that the IR from conv_filter_to_channels_last produces conv2d_nhwc_fhwc represented by linalg.generic.

This PR extends the pad_to_intrinsics pass to support other convolution variants, including:

  • Those represented with linalg.generic (see the sketch after this list)
  • Other layouts, such as the fhwc and fchw filter layouts

This PR will unblock #19974 and allows us to continue to use pad_to_intrinsics while the igemm padding kernel catches up in performance.
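For illustration, here is a minimal sketch (not taken from the PR; the shapes, element types, and value names are made up) of a conv_2d_nhwc_fhwc written as a linalg.generic — the kind of op conv_filter_to_channels_last produces, and which this pass can now match:

#map_in  = affine_map<(n, oh, ow, f, kh, kw, c) -> (n, oh + kh, ow + kw, c)>
#map_fil = affine_map<(n, oh, ow, f, kh, kw, c) -> (f, kh, kw, c)>
#map_out = affine_map<(n, oh, ow, f, kh, kw, c) -> (n, oh, ow, f)>

// The filter layout is fhwc (f, kh, kw, c) rather than the hwcf that the
// pass used to require; unit stride and dilation are assumed for brevity.
%conv = linalg.generic
    {indexing_maps = [#map_in, #map_fil, #map_out],
     iterator_types = ["parallel", "parallel", "parallel", "parallel",
                       "reduction", "reduction", "reduction"]}
    ins(%input, %filter : tensor<1x16x16x8xf16>, tensor<4x3x3x8xf16>)
    outs(%init : tensor<1x14x14x4xf32>) {
^bb0(%in: f16, %flt: f16, %acc: f32):
  %0 = arith.extf %in : f16 to f32
  %1 = arith.extf %flt : f16 to f32
  %2 = arith.mulf %0, %1 : f32
  %3 = arith.addf %acc, %2 : f32
  linalg.yield %3 : f32
} -> tensor<1x14x14x4xf32>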

@jerryyin jerryyin changed the title Adding support to generic op and flexible layout to pad_to_intrinsics on convolution [codegen][gpu] Adding support to generic op and flexible layout to pad_to_intrinsics on convolution Feb 24, 2025
@nirvedhmeshram (Contributor) left a comment


LGTM!

Signed-off-by: jerryyin <[email protected]>
@nirvedhmeshram (Contributor) commented Feb 24, 2025

Interesting regression: https://github.com/iree-org/iree/actions/runs/13503269777/job/37727576624?pr=20073#step:9:209

I guess we could land this and #19974 together, but I am not able to come up with a hypothesis for why this by itself is so bad. Any thoughts?

EDIT: Just thought of a hypothesis: if it's padding more convs now and that's causing a slowdown, I don't think merging with #19974 will help either.

@nirvedhmeshram (Contributor)

Actually, I see this on main, so it might not be anything in this PR:
https://github.com/iree-org/iree/actions/runs/13501782679/job/37724211700

@jerryyin (Member, Author) commented Feb 24, 2025

Yes, I agree with you: the culprit must be that I've made this path too flexible and now it can handle any type of convolution (versus in the past, when it only dealt with the linalg.conv2d hwcf variant). I'll try to reproduce locally and see what's going on.

> Actually, I see this on main, so it might not be anything in this PR

Oh wow, thanks for pointing that out. Let me take a second look at main's CI record too.

@jerryyin (Member, Author)

Per the Discord discussion, the perf degradation is caused by MI300 switching to CPX mode and is unrelated to this PR.

I'll leave this PR open till tomorrow before merging, in case there is other feedback.

@nirvedhmeshram (Contributor)

@jerryyin Since this is an optional (and experimental) pass, it is okay the way it is, but one thing to consider is whether we should have these two cases where we don't do the padding:
https://github.com/iree-org/iree/blob/main/compiler/src/iree/compiler/Dialect/LinalgExt/Utils/Utils.cpp#L432-L442

@jerryyin (Member, Author) commented Feb 24, 2025

@nirvedhmeshram Sounds good, will do.

On a second look, those scenarios have already been blocked by the conditional below. Since I preserved this conditional, I don't have to do anything here.

// Skip padding unless the convolution has exactly one output-channel and
// one input-channel dimension, at least one filter loop and one
// output-image dimension, and no depth dimension (a depthwise conv).
if (convolutionDims->outputChannel.size() != 1 ||
    convolutionDims->inputChannel.size() != 1 ||
    convolutionDims->filterLoop.size() < 1 ||
    convolutionDims->outputImage.size() < 1 ||
    convolutionDims->depth.size() != 0) {
  return;
}

I was imprecise when I mentioned that this PR would allow any type of convolution.

@jerryyin jerryyin merged commit 50ac991 into main Feb 25, 2025
44 of 46 checks passed
@jerryyin jerryyin deleted the users/zyin/support-generic-padtointrinsics branch February 25, 2025 14:06