
About the checkpoint #1

Open
rentainhe opened this issue Nov 5, 2024 · 4 comments

Comments

@rentainhe

rentainhe commented Nov 5, 2024

Hi authors! Thanks so much for releasing such great work! Would you be willing to provide PyTorch checkpoints (as .pth files) in addition to the ONNX checkpoints?

@Zailushang211

2024-11-08 07:12:36.963576075 [W:onnxruntime:, transformer_memcpy.cc:74 ApplyImpl] 48 Memcpy nodes are added to the graph main_graph for CUDAExecutionProvider. It might have negative impact on performance (including unable to run CUDA graph). Set session_options.log_severity_level=1 to see the detail logs before this message.
2024-11-08 07:12:36.974287246 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2024-11-08 07:12:36.974298274 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
2024-11-08 07:12:37.634942805 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
2024-11-08 07:12:37.634962319 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
0.500493049621582
I got the warnings above with ONNX, and inference is actually slower than on CPU, taking almost twice the time. Is this a problem with ONNX?

@qjadud1994
Collaborator

Thank you for your interest in our work.

We are considering releasing a PyTorch checkpoint (but it may take some time).

@qjadud1994
Collaborator

> got the warnings above with onnx, and the speed is even slower than on cpu, takes almost twice the time. is it a problem with onnx?

I suspect it is a problem with your onnxruntime installation.
Please follow the instructions in the onnxruntime installation docs.
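As a quick sanity check, something like the following sketch could confirm whether onnxruntime can actually use the GPU. This is an assumption-laden sketch, not the authors' code: it assumes the `onnxruntime-gpu` package is installed (the plain `onnxruntime` package is CPU-only), and `model.onnx` is a placeholder path.

```python
# Sanity check: is onnxruntime able to run on the GPU?
# Assumptions: onnxruntime-gpu is installed; "model.onnx" is a placeholder.

def has_cuda_provider(providers):
    """True if the CUDA execution provider appears in the given list."""
    return "CUDAExecutionProvider" in providers

def check_gpu_inference(model_path="model.onnx"):
    import onnxruntime as ort  # imported here so the helper above stays standalone

    available = ort.get_available_providers()
    print("Available providers:", available)
    if not has_cuda_provider(available):
        print("CUDAExecutionProvider not available; install onnxruntime-gpu "
              "with the CUDA/cuDNN versions listed in the install docs.")
        return None

    # Request CUDA first, falling back to CPU only if necessary.
    sess = ort.InferenceSession(
        model_path,
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    # Providers actually bound to this session; CUDA should come first.
    print("Session providers:", sess.get_providers())
    return sess
```

If `CUDAExecutionProvider` is missing from the available list, the session silently runs on CPU. The Memcpy warnings in the log also suggest some nodes were assigned to CPU, which forces host/device copies and can make a nominally GPU run slower than pure CPU.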

@rentainhe
Author

> We are considering releasing Pytorch checkpoint (but it might take some time).

Thank you so much, we are really looking forward to the PyTorch weight release.
