Match and lower ov::Relu #143
Conversation
Adds a ReLU op matcher and lowering to MLIR named Linalg ops. Also adds buffer deallocation passes to prevent memory leaks when temporary buffers are created in larger graphs.
Can we have this PyTorch example in the repo? It would be nice to share how we're testing them through git.
I'd be happy to gather a few examples. These could be a few random Python files in the repo.
NodePtr elementwise_f32_unary_no_broadcast() {
    using namespace ov::pass::pattern;
    return wrap_type<Op>({any_input()}, elementwise_f32_unary_no_broadcast_predicate);
}
@adam-smnk, may I ask you to move the whole thing into a separate file under the op subdirectory, as was done for MatMul? If it is not very clear, you can refuse, no problem at all -- I'll do it myself later and will also reorganize the part that is starting to become boilerplate.
No problem, I'll move it there.
Please try to put it into https://github.com/slyalin/openvino/blob/mlir/tests/layer_tests/pytorch_tests. Just copy one of the existing tests and replace the function that implements the target model. There we have some infra for ref/test output comparison, and a test can be run in both. Our tests in the MLIR topic are not really PyTorch specific, but it is simpler, more attractive, and more familiar for people to have PyTorch as an input format in this case. So let's keep them as PyTorch tests.
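The infra mentioned above compares reference outputs against test outputs. As a purely illustrative sketch of that ref/test comparison pattern (the function names below are made up and are not the actual pytorch_tests API):

```python
# Hypothetical sketch of the ref/test comparison pattern used by layer tests.
# The real infra lives under tests/layer_tests/pytorch_tests; names here are
# illustrative placeholders, not the repo's actual helpers.
def reference_impl(x):
    # Reference computation, e.g. ReLU applied elementwise.
    return [max(v, 0.0) for v in x]

def backend_impl(x):
    # Stand-in for running the same model through the tested backend.
    return [v if v > 0.0 else 0.0 for v in x]

def compare_ref_and_test(inputs):
    ref = reference_impl(inputs)
    test = backend_impl(inputs)
    assert ref == test, f"mismatch: {ref} vs {test}"
    return True

print(compare_ref_and_test([-2.0, 0.5, 3.0]))  # True
```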
@adam-smnk I think the best place for the test will be here https://github.com/openvinotoolkit/openvino/blob/master/tests/layer_tests/py_frontend_tests/test_torch_frontend.py
Why? That place is too specific to the FE itself and has random stuff to test PyTorch FE functionality with less focus on various operations; we need tests like layer tests (multiple layers).
if (std::any_of(inputs.begin(), inputs.end(), [&](const ov::Input<ov::Node>& input) {
    auto input_shape = input.get_partial_shape();
    for (size_t i = 0; i < output_shape.size(); ++i) {
        if (output_shape[i] != input_shape[i])
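The excerpt above checks whether any input shape deviates from the output shape, which is what rules out broadcasting. A minimal pure-Python sketch of that predicate (illustrative names, not the OpenVINO API):

```python
# Hypothetical sketch of the no-broadcast check: an elementwise op qualifies
# only if every input shape matches the output shape dimension by dimension.
# (Illustrative pure-Python stand-in, not OpenVINO's PartialShape API.)
def no_broadcast(output_shape, input_shapes):
    return all(
        len(s) == len(output_shape)
        and all(a == b for a, b in zip(s, output_shape))
        for s in input_shapes
    )

print(no_broadcast([2, 3], [[2, 3], [2, 3]]))  # True: shapes match exactly
print(no_broadcast([2, 3], [[1, 3]]))          # False: would require broadcasting
```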
TODO for @slyalin: work around the case when ranges are different for dimensions.
Let's merge it "as-is" and bring other improvements in a separate PR.
I think this case is too specific; it can be placed in
Example with ReLU:
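The original example snippet is not preserved here. As a minimal sketch of what the lowered op computes (plain Python rather than the repo's PyTorch test infra): ReLU is elementwise relu(x) = max(x, 0), which is the semantics the named Linalg lowering implements.

```python
# Minimal illustrative sketch of ReLU semantics, relu(x) = max(x, 0),
# applied elementwise. This is not the PR's actual PyTorch example.
def relu(values):
    return [max(v, 0.0) for v in values]

print(relu([-1.5, 0.0, 2.0]))  # [0.0, 0.0, 2.0]
```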