[Squeeze] Introduce Squeeze and Unsqueeze hardware operators #1153
This includes HWCustomOp and HLSBackend specializations of both operators, aiming for full ONNX compliance. Adds infrastructure for converting the standard ONNX version of the operators to the FINN dialect, which mostly means transplanting the node into the FINN domain and setting a few type and shape attributes. Adds unit tests covering Python, C++ and RTL simulation, as well as a simple integration test starting from PyTorch model export.
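To illustrate the conversion idea, a minimal sketch of such a pass in the style of a qonnx graph transformation could look as follows. The pass name, domain string and attribute names are illustrative assumptions, not necessarily the ones introduced by this PR:

```python
import onnx.helper as oh
from qonnx.transformation.base import Transformation


class InferSqueezeUnsqueeze(Transformation):
    # Hypothetical pass: transplants standard ONNX Squeeze/Unsqueeze nodes
    # into the FINN domain and annotates them with type/shape information.
    def apply(self, model):
        graph_modified = False
        for node in model.graph.node:
            if node.op_type in {"Squeeze", "Unsqueeze"} and node.domain == "":
                # Transplanting keeps op_type, inputs and outputs intact;
                # only the domain changes, so the FINN custom-op lookup
                # resolves the node from now on.
                node.domain = "finn.custom_op.fpgadataflow"
                # Record datatype and output shape as node attributes so
                # downstream hardware passes need not re-derive them
                # (attribute names are assumptions for illustration).
                idt = model.get_tensor_datatype(node.input[0])
                node.attribute.append(oh.make_attribute("data_type", idt.name))
                out_shape = model.get_tensor_shape(node.output[0])
                node.attribute.append(oh.make_attribute("shape", out_shape))
                graph_modified = True
        return model, graph_modified
```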
Proposes a new scheme for registering and importing custom operators into their corresponding module namespace, i.e., the `custom_op` dictionary used to look up operators by ONNX domain. This is the same scheme as already proposed in #1040.
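As a sketch of what such a registration scheme might look like (an assumption based on the description above, not the actual code from #1040), a decorator could populate the per-domain dictionary as a side effect of defining the operator class:

```python
# Hypothetical module-level registry, e.g. in a custom-op package's
# __init__.py: maps an op_type string to its implementing class and is
# what the ONNX-domain lookup consults when resolving a node.
custom_op = {}


def register_custom_op(cls):
    """Decorator registering an operator class under its class name."""
    custom_op[cls.__name__] = cls
    return cls


# Usage (illustrative): defining the operator is enough to make it
# discoverable via the domain's custom_op dictionary.
#
# @register_custom_op
# class Squeeze(HWCustomOp):
#     ...
```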
Support for these operators might seem unnecessary, as they have no real effect on the stream/dataflow. However, they can be useful as a workaround for adapting between data layouts, for example when combining convolutions (assuming 4-dimensional layouts) and attention operations (working on 3-dimensional, or rather 2-dimensional, layouts). I will link an example demonstrating this later...
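In the meantime, a hypothetical PyTorch toy model shows how such layout adapters arise: a convolution produces 4-dimensional NCHW tensors, while attention expects a 3-dimensional sequence layout, so Squeeze/Unsqueeze (or Reshape) nodes appear at the boundary when exporting to ONNX. All names and shapes below are illustrative:

```python
import torch
import torch.nn as nn


class ConvThenAttention(nn.Module):
    # Toy model: a 2d convolution followed by self-attention over the
    # width dimension, with squeeze/unsqueeze bridging the two layouts.
    def __init__(self, channels=8):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        self.attn = nn.MultiheadAttention(channels, num_heads=2, batch_first=True)

    def forward(self, x):            # x: (N, C, 1, W), 4d conv layout
        x = self.conv(x)
        x = x.squeeze(2)             # (N, C, W): drop the singleton height
        x = x.transpose(1, 2)        # (N, W, C): sequence layout for attention
        x, _ = self.attn(x, x, x)
        x = x.transpose(1, 2).unsqueeze(2)  # back to (N, C, 1, W)
        return x


# Exporting surfaces the Squeeze/Unsqueeze nodes at the layout boundary:
# torch.onnx.export(ConvThenAttention(), torch.randn(1, 8, 1, 16), "model.onnx")
```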