[Good First Issue][NNCF]: Add ONNX support of data-free Weight Compression Algorithm #3273
Comments
Hi @kshpv, I'd like to work on this issue. I have experience with ONNX and can help implement weight compression algorithm support for the ONNX backend.
@XueSongTap, thank you for your interest! Assigned to you.
I would like to assist with Step 2 (Test the Compression): I have prepared a set of scripts that can be used with the following workflow:
Let me know when we are good with step 1.
@XueSongTap, how long do you estimate it will take you to complete this task?
Hi @alexsu52, based on the requirements and my current understanding, I estimate it will take around 2-3 weeks to complete this task: Week 1:
Week 2:
Week 3 (if needed):
I plan to provide regular updates and can adjust the timeline based on feedback. Please let me know if this timeline works or if you have different expectations.
@XueSongTap, sounds good! We look forward to updates from you. If you have any questions, don't hesitate to ask them.
Context
NNCF supports the OpenVINO, Torch, and TorchFX backends for the weight compression algorithm, nncf.compress_weights(). The goal of this issue is to expand support to the ONNX backend. The structure of the NNCF code is designed in a way that makes this quite straightforward, but it requires attention to detail.
Very important: the runtime target is the OpenVINOExecutionProvider for ONNX Runtime.
What needs to be done?
The task is to implement data-free int8 and uint8 Weight Compression algorithm support, which includes:
1. Implement WeightCompressionAlgoBackend for ONNX:
We already have this implemented for OpenVINO, Torch, and TorchFX, so you can use those implementations as references. Some methods, such as insert_adapters, _get_statistics_for_weights_compression, and dump_parameters, can be skipped. The goal is to make sure we can run nncf.compress_weights(onnx_model) and get an ONNX model with weights compressed in int8 or uint8 format.
2. Test the Compression:
Ensure that running nncf.compress_weights(onnx_model) actually produces a compressed ONNX model.
3. Add Initial Tests:
This is essential to prove that the algorithm works correctly. There are two types of tests we need:
Conformance Tests: Add a tinyllama_data_free case for ONNX, similar to what we have for OpenVINO. Note: a good starting point is to read the conformance tests readme.
Unit Tests: We'll need to add some unit tests. Note: there are existing unit tests for OpenVINO, Torch, and TorchFX to use as references.
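To give a flavor of what a unit test for the quantization math might assert (a hypothetical pure-Python sketch; real NNCF unit tests exercise the actual backend and model classes):

```python
def quantize_int8_symmetric(weights):
    """Symmetric int8 quantization: scale from max |w|, zero point fixed at 0."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # guard against all-zero weights
    return [max(-128, min(127, round(w / scale))) for w in weights], scale


def test_int8_roundtrip():
    weights = [0.3, -1.7, 0.0, 0.9]
    quantized, scale = quantize_int8_symmetric(weights)
    # All values must fit in the signed 8-bit range.
    assert all(-128 <= q <= 127 for q in quantized)
    # Quantize->dequantize error is bounded by half a quantization step.
    for w, q in zip(weights, quantized):
        assert abs(q * scale - w) <= scale / 2


test_int8_roundtrip()
```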
The work can be split into subtasks to make development and review faster; this is up to you and can be discussed.
If you have any questions or need guidance, feel free to ask in the comments or reach out to the maintainers.
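A back-of-the-envelope sanity check behind step 2: int8 storage should shrink the weight footprint roughly 4x versus fp32, minus a small per-channel overhead for scale and zero point. A pure-Python illustration, not NNCF's actual serialization:

```python
import struct

# A toy fp32 weight tensor and a placeholder uint8-quantized counterpart.
weights = [0.1 * i for i in range(1024)]
fp32_bytes = struct.pack(f"{len(weights)}f", *weights)  # 4 bytes per weight
int8_bytes = bytes(len(weights))                        # 1 byte per weight
# Account for one fp32 scale and one int32 zero point for the channel.
ratio = len(fp32_bytes) / (len(int8_bytes) + 4 + 4)
print(f"compression ratio: {ratio:.2f}x")               # close to 4x
```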
Example Pull Requests
Adding support of data free for Torch - #2333
Adding support of data free for TorchFX - #2891
Resources
Contact points
@kshpv
The description is not yet complete and will be updated.