Contributing to the ONNX-MLIR project

Building ONNX-MLIR

Up-to-date information on how to build the project is located in the README in the top-level directory.

Since you are interested in contributing code, you should look at the Workflow document for detailed, step-by-step directions on how to create a fork, compile the project, and push your changes for review. A rough sketch of that flow is shown below.
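As an illustration only (the Workflow document is the authoritative reference; `<your-username>`, the branch name, and the assumption that the default branch is `main` are placeholders), a fork-based flow typically looks like this:

```sh
# Fork onnx/onnx-mlir on GitHub first, then clone your fork
# (--recursive pulls in the project's submodules).
git clone --recursive https://github.com/<your-username>/onnx-mlir.git
cd onnx-mlir

# Track the upstream repo so you can keep your fork in sync.
git remote add upstream https://github.com/onnx/onnx-mlir.git
git fetch upstream

# Work on a feature branch, push it to your fork, and open a
# pull request against onnx/onnx-mlir for review.
git checkout -b my-feature upstream/main
git push -u origin my-feature
```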

Contributors have to sign their commits using the Developer Certificate of Origin (DCO); make sure to check our instructions prior to committing code to our repo.
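In practice, the DCO sign-off amounts to a `Signed-off-by` trailer on every commit, which `git` can add for you:

```sh
# -s (--signoff) appends a Signed-off-by trailer derived from your
# configured git user.name and user.email.
git commit -s -m "Describe your change"

# The commit message then ends with a line such as:
#   Signed-off-by: Your Name <your.email@example.com>
```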

A comprehensive list of documents is found here.

Guides for code generation for ONNX operations

  • A guide on how to add support for a new operation is found here.
  • A guide to the Dialect builder, which details how to generate Krnl, Affine, MemRef, and Standard Dialect operations, is found here; a brief sketch of the pattern follows this list.
  • A guide on how to best report errors is detailed here.
  • Our ONNX dialect is derived from the machine-readable ONNX specs. When upgrading the supported opset, or adding features to the ONNX dialect such as new verifiers, constant folding, canonicalization, or other such features, we need to regenerate the ONNX TableGen files. See here for how to proceed in such cases.
  • To add an option to the onnx-mlir command, see instructions here.
  • To test new code, see here for instructions.
  • A guide on how to do constant propagation for ONNX operations is found here.
  • To build and test for a specialized accelerator, see here.
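To give a flavor of the Dialect builder mentioned above, lowering code in onnx-mlir commonly bundles several builders into a `MultiDialectBuilder`. The fragment below is a condensed sketch, not a drop-in pattern: it assumes the builder classes and method names from `src/Dialect/Mlir/DialectBuilder.hpp` and the Krnl builder header, and it elides the surrounding conversion-pattern boilerplate, includes, and loop construction.

```cpp
// Sketch only: assumes onnx-mlir's DialectBuilder API; the enclosing
// ConversionPattern, headers, and krnl loop setup are omitted.
void emitElementwiseAdd(mlir::PatternRewriter &rewriter, mlir::Location loc,
                        mlir::Value X, mlir::Value Y, mlir::Value Z,
                        mlir::ValueRange ivs) {
  // One object exposes several dialect builders at once.
  onnx_mlir::MultiDialectBuilder<onnx_mlir::KrnlBuilder, onnx_mlir::MathBuilder>
      create(rewriter, loc);
  mlir::Value a = create.krnl.load(X, ivs);   // load an element of X
  mlir::Value b = create.krnl.load(Y, ivs);   // load an element of Y
  mlir::Value sum = create.math.add(a, b);    // type-appropriate add
  create.krnl.store(sum, Z, ivs);             // store into the result
}
```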

ONNX-MLIR specific dialects

  • The onnx-mlir project is based on the opset version defined here, which may lag behind the current version of the ONNX operators defined in the onnx/onnx repo here.
  • The Krnl Dialect is used to lower ONNX operators to the MLIR affine dialect; it is defined here.
  • To update the internal documentation on our dialects when they change, please look for guidance here.
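For a flavor of what Krnl IR looks like, here is a small hand-written example, loosely modeled on the dialect documentation (the exact textual syntax may differ across versions): a 2-D `krnl.iterate` loop nest that copies one buffer into another.

```mlir
%loops:2 = krnl.define_loops 2
krnl.iterate(%loops#0, %loops#1) with (
    %loops#0 -> %i = 0 to 4, %loops#1 -> %j = 0 to 8) {
  %v = krnl.load %A[%i, %j] : memref<4x8xf32>
  krnl.store %v, %B[%i, %j] : memref<4x8xf32>
}
```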

Testing and debugging ONNX-MLIR

  • To test new code, see here for instructions.
  • We have support for tracing performance issues using instrumentation. Details are found here.
  • We also have support for debugging numerical errors. See here.

Running ONNX models in Python and C

  • An end-to-end MNIST example using the C++ or Python interface: link.
  • Instructions for running a compiled model in Python: link.
  • The C runtime API for running models in C/C++: link. A minimal C sketch follows this list.
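As an illustration of that C runtime API, here is a minimal, hedged sketch of invoking a compiled model from C. It assumes the conventions described in the runtime documentation: the model was compiled by onnx-mlir into a library exposing the default `run_main_graph` entry point, and it takes a single 1x10 float tensor; the shape, data, and output handling here are purely illustrative.

```c
#include <stdio.h>
#include <OnnxMlirRuntime.h>

/* Entry point generated by onnx-mlir (run_main_graph is the default name). */
extern OMTensorList *run_main_graph(OMTensorList *);

int main() {
  /* Illustrative input: a 1x10 float tensor; match your model's signature. */
  static float data[10];
  int64_t shape[] = {1, 10};
  OMTensor *x = omTensorCreate(data, shape, 2, ONNX_TYPE_FLOAT);
  OMTensor *inputs[] = {x};
  OMTensorList *inputList = omTensorListCreate(inputs, 1);

  /* Run inference and read back the first output tensor. */
  OMTensorList *outputList = run_main_graph(inputList);
  OMTensor *y = omTensorListGetOmtByIndex(outputList, 0);
  float *out = (float *)omTensorGetDataPtr(y);
  printf("first output element: %f\n", out[0]);
  return 0;
}
```

Such a program is compiled and linked against the model's generated library and the onnx-mlir runtime; see the linked documentation for the exact build commands.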

Documentation

Coordinating support for new ONNX operations

  • Check the status of operations required for the ONNX Model Zoo in Issue 128.
  • Claim an op that you are working on by adding a comment on Issue #922.