
Exporting PARSEQ Recognition Model to ONNX #1468

Open

leduy-it opened this issue Dec 27, 2024 · 0 comments

Comments


leduy-it commented Dec 27, 2024

Please fill in the information below completely so we can resolve the issue quickly, thank you!

Problem description
Please describe the error in detail here

[Paddle2ONNX] Start to parse PaddlePaddle model...
[Paddle2ONNX] Model file path: ./paddle_inference/weights/paddle_inference/parseq_171224/inference.pdmodel
[Paddle2ONNX] Parameters file path: ./paddle_inference/weights/paddle_inference/parseq_171224/inference.pdiparams
[Paddle2ONNX] Start to parsing Paddle model...
[Paddle2ONNX] DenseTensorArray is not supported.
[Paddle2ONNX] Oops, there are some operators not supported yet, including lod_array_length,memcpy,tensor_array_to_tensor,while,write_to_array,
[ERROR] Due to the unsupported operators, the conversion is aborted.


--------------------------------------
C++ Traceback (most recent call last):
--------------------------------------
0   paddle2onnx::Export(char const*, char const*, char**, int*, int, bool, bool, bool, bool, bool, paddle2onnx::CustomOp*, int, char const*, char**, int*, char const*, bool*, bool, char**, int)

----------------------
Error Message Summary:
----------------------
FatalError: `Process abort signal` is detected by the operating system.
  [TimeInfo: *** Aborted at 1735310555 (unix time) try "date -d @1735310555" if you are using GNU date ***]
  [SignalInfo: *** SIGABRT (@0xbb) received by PID 187 (TID 0x727a43df9740) from PID 187 ***]

Aborted (core dumped)
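
For context, the conversion command was roughly the following (flag names as in the Paddle2ONNX documentation; the opset version and output filename here are placeholders for what I actually used):

paddle2onnx --model_dir ./paddle_inference/weights/paddle_inference/parseq_171224 --model_filename inference.pdmodel --params_filename inference.pdiparams --save_file parseq_rec.onnx --opset_version 16

The unsupported operators (while, write_to_array, tensor_array_to_tensor, lod_array_length) presumably come from the autoregressive decoding loop in the PARSEQ head, which the exported static graph expresses as a TensorArray-based while loop.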

More information:

  • Inference engine used for deployment: to be updated

  • Why the model needs to be converted to ONNX: Deploying in ONNX Runtime

  • Paddle2ONNX version:
    Name: paddle2onnx
    Version: 1.3.1

  • Contact information (Email/WeChat/Phone): [email protected]

Error screenshot

[Image: screenshot of the error output above]

Other information

  • I am working on integrating PARSEQ OCR recognition with PaddleOCR.

I came across Issue #12, which discusses ONNX export handling in PyTorch. I'm exploring ONNX export for PARSEQ but want to retain AR mode functionality and refinement iterations without compromise.
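
On the PyTorch side, the export I have been experimenting with looks roughly like the sketch below. It assumes the torch.hub entrypoint from the parseq README; the input shape, opset version, and output path are my own placeholders. With AR decoding left enabled, tracing simply unrolls the decoding loop for the dummy input, which is the behaviour I would like to handle properly rather than lose:

import torch

# Load the pretrained PARSEQ model via torch.hub (entrypoint from the parseq README).
model = torch.hub.load('baudm/parseq', 'parseq', pretrained=True).eval()

# PARSEQ's default input is a 32x128 RGB crop; batch size 1 for tracing.
dummy = torch.randn(1, 3, 32, 128)

# Trace-based export. The AR decoding loop and refinement iterations are Python
# control flow, so torch.onnx.export unrolls them for this particular dummy input
# instead of exporting a dynamic loop.
torch.onnx.export(
    model,
    dummy,
    'parseq_ar.onnx',
    input_names=['image'],
    output_names=['logits'],
    dynamic_axes={'image': {0: 'batch'}, 'logits': {0: 'batch'}},
    opset_version=14,
)

The workaround I have seen suggested is to disable AR decoding (model.decode_ar = False) and set model.refine_iters = 0 before export, but that changes the decoding behaviour, which I would prefer to avoid.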

I would appreciate any guidance or suggestions on how to achieve this.

Thank you!
