Upgrading llvm and stablehlo hash #3053
base: main
Conversation
Looks good, but I had one comment.
@@ -48,7 +48,7 @@ Value buildOnnxToTosaPaddingConstOp(mlir::PatternRewriter &rewriter,

   // Create a new pad vec in the right format
   // ONNX : [b1, b2, b3, b4, e1, e2, e3, e4]
-  // TOSA :[[b1, e1], [b2, e2], [b3, e3], [b4, e4]]
+  // TOSA :[b1, e1, b2, e2, b3, e3, b4, e4]
@jorickert This update was made due to the following error message:
/Users/christophermunoz/repos/cmunoz1/onnx-mlir/test/mlir/conversion/onnx_to_tosa/NN/AveragePool.mlir:139:8: error: 'tosa.pad' op operand #1 must be 1D tensor of 32-bit signless integer or 64-bit signless integer values, but got 'tensor<4x2xi64>'
Just want to make sure this update is fine. I am not too familiar with TOSA, so I'm asking just in case.
LLVM made some updates to TOSA as shown in this PR: llvm/llvm-project#122547
@hamptonm1 thanks for reviewing this, I will assume that you will approve it once you are satisfied with the answers to your question.
formatting fixes
Signed-off-by: Christopher Munoz <[email protected]>
Force-pushed from 272152d to 3784890
@AlexandreEichenberger @tungld I think Chris mentioned he was working with you guys to resolve the
Signed-off-by: Christopher Munoz <[email protected]>
Force-pushed from 8b9e8b1 to c6307f4
I spoke with Tung again. Created issue #3068 and commented out the tests. Will continue the investigation and work to resolve this ASAP.
Signed-off-by: Christopher Munoz <[email protected]>
@christopherlmunoz Before I approve, please reach out to @jorickert via Slack to answer the question above. I want to make sure the
I can look at that issue, since it impacts quantization operations. Can we quantize benchmarks without this fix? If we cannot, then this PR should not go in.
LGTM, tosa.pad will need to be changed again when the LLVM bump containing llvm/llvm-project#123133 lands
@@ -147,6 +147,8 @@ class ONNXConvOpLoweringToTOSA : public ConversionPattern {
     DenseI64ArrayAttr newPads =
         rewriter.getDenseI64ArrayAttr({pads[0], pads[2], pads[1], pads[3]});

+    TypeAttr accType = mlir::TypeAttr::get(rewriter.getF32Type());
We could use F16 here if the datatype of the conv is F16, but using F32 for the accumulator should generally be safe.
Hi @jorickert, would you prefer something like this? Otherwise, I'm happy to keep it the same.
Type convType = resultType.isF16() ? rewriter.getF16Type() : rewriter.getF32Type();
TypeAttr accType = mlir::TypeAttr::get(convType);
Yes, I think this would be better
Signed-off-by: Christopher Munoz <[email protected]>
@AlexandreEichenberger Were you able to resolve the backend test? I know you wanted to fix that before merging this PR.
Please go ahead with this, if the path that generates the error listed in #3068 is disabled, i.e. not just the test, but the options that generate the offending code.
llvm: b270525f730be6e7196667925f5a9bfa153262e9
stablehlo: 459e481b