Extend mlir-gen to emit linalg named Ops (libxsmm#933).
Adds support to generate linalg named Ops for matmul, bias, relu.
This feature can be controlled using a new flag '--output'.

For example:
To generate generic linalg Ops use '--output=generic'
To generate named linalg Ops use '--output=named'

The default behaviour is to generate linalg generic Ops.
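
For illustration, a minimal sketch of what the two modes produce for a single matmul; the shapes and SSA value names are hypothetical and the generator's actual output may differ:

// With --output=named, the matmul is emitted as a linalg named op:
%named = linalg.matmul ins(%A, %B : tensor<10x10xf32>, tensor<10x10xf32>)
                       outs(%C : tensor<10x10xf32>) -> tensor<10x10xf32>

// With --output=generic, the same computation is a linalg.generic with
// explicit indexing maps, iterator types, and a scalar body:
%generic = linalg.generic {
    indexing_maps = [affine_map<(d0, d1, d2) -> (d0, d2)>,
                     affine_map<(d0, d1, d2) -> (d2, d1)>,
                     affine_map<(d0, d1, d2) -> (d0, d1)>],
    iterator_types = ["parallel", "parallel", "reduction"]}
    ins(%A, %B : tensor<10x10xf32>, tensor<10x10xf32>)
    outs(%C : tensor<10x10xf32>) {
  ^bb0(%a: f32, %b: f32, %acc: f32):
    %mul = arith.mulf %a, %b : f32
    %sum = arith.addf %acc, %mul : f32
    linalg.yield %sum : f32
} -> tensor<10x10xf32>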

Adds a named-op test, which passes out of the box.

- Adds another option, '--keep-generic-matmul', to emit a generic
  matmul even when linalg named-ops output is chosen (see the sketch after this list).

- Refactors the code.
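
As a rough illustration of the mixed mode, assuming bias and relu lower to the elementwise named ops linalg.add and linalg.max (the actual ops and shapes mlir-gen emits may differ), a layer under '--output=named --keep-generic-matmul' could keep the matmul as a linalg.generic while the rest uses named ops:

// Matmul stays a linalg.generic (as in the sketch above), then:
%biased = linalg.add ins(%mm, %bias : tensor<10x10xf32>, tensor<10x10xf32>)
                     outs(%init0 : tensor<10x10xf32>) -> tensor<10x10xf32>
%relu = linalg.max ins(%biased, %zeros : tensor<10x10xf32>, tensor<10x10xf32>)
                   outs(%init1 : tensor<10x10xf32>) -> tensor<10x10xf32>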
shahidact committed Jul 18, 2024
1 parent 0d449bd commit 26e3b8f
Showing 3 changed files with 6 additions and 2 deletions.
3 changes: 3 additions & 0 deletions test/Integration/mlir-gen.mlir
@@ -1,5 +1,8 @@
// MLP with Softmax version
// RUN: mlir-gen --kernel=const --bias --relu --seed=123 --batch=10 --layers=10,10,10 --softmax | tpp-run -e entry -entry-point-result=void
// RUN: not --crash mlir-gen --output=named --kernel=const --bias --relu --seed=123 --batch=10 --layers=10,10,10 --softmax 2>&1 | FileCheck %s --check-prefix=SOFTMAX-TODO
// SOFTMAX-TODO: Linalg named ops for softmax not implemented yet
// SOFTMAX-TODO: UNREACHABLE executed

// MLP without softmax
// RUN: mlir-gen --kernel=const --bias --relu --seed=123 --batch=10 --layers=10,10,10 | tpp-run -e entry -entry-point-result=void
@@ -128,4 +128,4 @@ func.func @matmul_sequence_fusion_with_relu(%arg0: tensor<32x64xf32>, %arg1: ten
// CHECK: scf.yield %{{.+}} : tensor<32x32xf32>
// CHECK-NEXT: }

// -----
// -----
3 changes: 2 additions & 1 deletion tools/mlir-gen/MLIRGen.cpp
@@ -517,7 +517,8 @@ Value MLIRGenerator::lowerNamedSoftmax(Value input, Value output) {
return input;

// TODO: Add lowering of softmax to sequence of named Ops

llvm_unreachable("Linalg named ops for softmax not implemented yet");

auto outTy = cast<ShapedType>(input.getType());
// Softmax flops = 4 * M * N = 4 * prod(outputDims)
int64_t softmaxFlops = 1;
