
Match and lower ov::Relu #143

Merged
merged 2 commits into slyalin:mlir on Jul 19, 2024
Conversation

adam-smnk
Collaborator

Adds a ReLU op matcher and lowering to MLIR named Linalg ops.
Also adds buffer deallocation passes to prevent memory leaks when temporary buffers are created.

Example with ReLU:

import torch
import torch.nn as nn
import openvino as ov

# Define a synthetic model
class LinearModel(nn.Module):
    def __init__(self, input_size, output_size):
        super(LinearModel, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(input_size, output_size),
            nn.ReLU(),
        )

    def forward(self, a):
        return self.model(a)

# Create an instance of the model
input_size = 10
output_size = 5
model = LinearModel(input_size, output_size)
# Generate random weights
model.model[0].weight.data.normal_(0, 0.01)
model.model[0].bias.data.fill_(0.01)

input_data = torch.tensor(range(1, 10*10+1)).to(torch.float32).view(10, 10)

with torch.no_grad():
    reference = model(input_data)
    print('Reference:\n', reference)

ov_model = ov.convert_model(model, example_input=input_data)

# expect lots of debug output with MLIR fragments before and after lowering in this call:
ov_compiled = ov.compile_model(ov_model, 'CPU')

tested = ov_compiled(input_data)[0]
print('Tested:\n', tested)
diff = reference - tested
print('Reference - Tested:\n', diff)
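
For context on the second part of this change, buffer deallocation is typically appended as extra passes at the end of the bufferized lowering pipeline. Below is a minimal sketch assuming recent upstream MLIR pass names; the helper name and the exact pass set are illustrative, not necessarily what this PR wires in.

#include "mlir/Dialect/Bufferization/Transforms/Passes.h"
#include "mlir/Pass/PassManager.h"
#include "mlir/Transforms/Passes.h"

// Illustrative sketch: free temporary buffers created during bufferization
// instead of leaking them. Pass names assume a recent upstream MLIR.
void appendBufferDeallocationPasses(mlir::PassManager &pm) {
    // Insert bufferization.dealloc ops based on buffer ownership analysis.
    pm.addPass(mlir::bufferization::createOwnershipBasedBufferDeallocationPass());
    // Simplify the inserted ownership/deallocation logic, then canonicalize.
    pm.addPass(mlir::bufferization::createBufferDeallocationSimplificationPass());
    pm.addPass(mlir::createCanonicalizerPass());
    // Lower bufferization.dealloc to plain memref deallocations.
    pm.addPass(mlir::bufferization::createLowerDeallocationsPass());
}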

Adds ReLU op matcher and lowering to MLIR named Linalg ops.
Also, adds buffer deallocation passes to prevent memory leaks when temporary buffers are created in larger graphs.
@rengolin
Collaborator

Can we have this PyTorch example in the repo? It would be nice to share how we're testing them through git.

@adam-smnk
Collaborator Author

Can we have this PyTorch example in the repo? It would be nice to share how we're testing them through git.

I'd be happy to gather a few examples. This could be a few random Python files in the repo.
But maybe @slyalin has suggestions for a better testing or organization approach?

NodePtr elementwise_f32_unary_no_broadcast() {
    using namespace ov::pass::pattern;
    return wrap_type<Op>({any_input()}, elementwise_f32_unary_no_broadcast_predicate);
}
Owner

@adam-smnk, may I ask you to move the whole thing into a separate file under the op subdirectory, as was done for MatMul? If it is not very clear, you can refuse, no problem at all -- I'll do it myself later and also reorganize the part that is starting to become boilerplate.

Collaborator Author

No problem, I'll move it there.
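
For illustration, assuming elementwise_f32_unary_no_broadcast above is a function template over the matched op type, the Relu pattern would then be instantiated roughly like this (a sketch, not the exact code from the PR):

// Hypothetical usage: build the Relu matcher from the generic unary helper.
auto relu_pattern = elementwise_f32_unary_no_broadcast<ov::op::v0::Relu>();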

@slyalin
Owner

slyalin commented Jul 18, 2024

Can we have this PyTorch example in the repo? It would be nice to share how we're testing them through git.

I'd be happy to gather a few examples. This could be a few random Python files in the repo. But maybe @slyalin has suggestions for a better testing or organization approach?

Please try to put it into https://github.com/slyalin/openvino/blob/mlir/tests/layer_tests/pytorch_tests. Just copy one of the existing tests and replace the function that implements the target model. There we have some infra for reference/test output comparison, and a test can be run both in ov.convert_model mode and in torch.compile mode. Summoning @mvafin for further clarification.

Our tests on the MLIR topic are not really PyTorch-specific, but it is simpler, more attractive, and more familiar for people to have PyTorch as the input format in this case. So let's keep them as PyTorch tests.

@mvafin
Collaborator

mvafin commented Jul 19, 2024

@adam-smnk I think the best place for the test will be here https://github.com/openvinotoolkit/openvino/blob/master/tests/layer_tests/py_frontend_tests/test_torch_frontend.py
@slyalin
Owner

slyalin commented Jul 19, 2024

@adam-smnk I think the best place for the test will be here https://github.com/openvinotoolkit/openvino/blob/master/tests/layer_tests/py_frontend_tests/test_torch_frontend.py

Why? That place is too specific to the FE itself and has assorted tests of PyTorch FE functionality with less focus on various operations; we need tests like the layer tests (multiple layers).

if (std::any_of(inputs.begin(), inputs.end(), [&](const ov::Input<ov::Node>& input) {
        for (size_t i = 0; i < output_shape.size(); ++i) {
            auto input_shape = input.get_partial_shape();
            if (output_shape[i] != input_shape[i])
Owner

TODO for @slyalin: work around the case when ranges are different for dimensions.
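
One possible direction (an assumption, not the code in this PR) is to accept dimensions whose ranges can overlap instead of requiring identical ranges, e.g. via ov::Dimension::compatible:

// Hypothetical sketch: reject a match only when the two dimension ranges
// cannot possibly describe the same value.
bool dims_match(const ov::Dimension& out_dim, const ov::Dimension& in_dim) {
    return out_dim.compatible(in_dim);
}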

@slyalin (Owner) left a comment

Let's merge it "as-is" and bring other improvements in a separate PR.

@adam-smnk merged commit b548f56 into slyalin:mlir on Jul 19, 2024
13 of 29 checks passed
@mvafin
Collaborator

mvafin commented Jul 19, 2024

@adam-smnk I think the best place for the test will be here https://github.com/openvinotoolkit/openvino/blob/master/tests/layer_tests/py_frontend_tests/test_torch_frontend.py

Why? That place is too specific to the FE itself and has assorted tests of PyTorch FE functionality with less focus on various operations; we need tests like the layer tests (multiple layers).

I think this case is too specific. It can be placed in layer_tests/pytorch_tests too, but the relu tests there are done together with all the other activations, so you would need to add a new file with your test. The place I suggested just allows you to create one specific test for one case; it's up to you which you like more.
