
On TensorRT FP16 and INT8 precision #260

Open

ycdhqzhiai opened this issue Jun 26, 2023 · 4 comments

Comments

@ycdhqzhiai

The README only seems to mention inference speed. Has anyone measured the precision loss? In my tests even the FP16 output does not match the ONNX output.
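For quantifying the mismatch rather than eyeballing it, here is a minimal sketch that compares an ONNX Runtime reference output against a dumped TensorRT output. File names (`input.npy`, `model.onnx`, `trt_output.npy`) and the input tensor name `input` are hypothetical. Note that with FP16, small but nonzero differences against an FP32 ONNX reference are expected; bitwise equality is not the right acceptance criterion.

```python
# Sketch only: file names and the "input" tensor name are assumptions.
import numpy as np
import onnxruntime as ort

x = np.load("input.npy").astype(np.float32)  # same input fed to the TRT engine
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
onnx_out = sess.run(None, {"input": x})[0].astype(np.float32)
trt_out = np.load("trt_output.npy").astype(np.float32)  # dumped from the TRT run

abs_err = np.abs(onnx_out - trt_out)
rel_err = abs_err / (np.abs(onnx_out) + 1e-6)
a, b = onnx_out.ravel(), trt_out.ravel()
cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

print(f"max abs err:  {abs_err.max():.6f}")
print(f"mean rel err: {rel_err.mean():.6f}")
print(f"cosine sim:   {cosine:.6f}")
```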

@LCH1238

LCH1238 commented Jul 13, 2023

You can refer to this project: bevdet-tensorrt-cpp

@ycdhqzhiai
Author

I looked through your code. My engine is converted with trtexec, and the rest does not differ much from yours. In the ONNX model, my part1 only extracts features, while yours takes additional extrinsics-related parameters. So far only my FP32 results match the ONNX output; FP16 and INT8 both fail to match.
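When FP32 matches but FP16 does not, one common debugging step (not confirmed by this thread for this particular model) is to build the FP16 engine with the TensorRT Python API instead of trtexec and pin the layers that drift back to FP32. A sketch, where `part1.onnx` is a hypothetical file name and which layers to pin depends on where the drift actually occurs:

```python
# Sketch under assumptions: "part1.onnx" is hypothetical, and the choice of
# layers to pin to FP32 must come from actually localizing the FP16 drift.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("part1.onnx", "rb") as f:
    assert parser.parse(f.read()), "failed to parse ONNX"

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
# Make TensorRT honor the per-layer precisions set below.
config.set_flag(trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS)
for i in range(network.num_layers):
    layer = network.get_layer(i)
    if layer.type == trt.LayerType.SOFTMAX:  # example of a sensitive layer type
        layer.precision = trt.float32
        layer.set_output_type(0, trt.float32)

with open("part1_fp16.engine", "wb") as f:
    f.write(builder.build_serialized_network(network, config))
```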

@LCH1238

LCH1238 commented Jul 17, 2023

Relatively large error after INT8 quantization is normal; FP16 shows essentially no error on my side.
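Part of the INT8 error usually comes from calibration rather than quantization itself: with too few or unrepresentative calibration samples, the per-tensor dynamic ranges end up off. A minimal entropy-calibrator sketch, assuming calibration inputs are pre-saved as .npy files (all names, shapes, and the batch size of 1 are hypothetical):

```python
# Sketch only: file layout, cache path, and batch size are assumptions.
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda

class NpyCalibrator(trt.IInt8EntropyCalibrator2):
    def __init__(self, files, cache_path="calib.cache"):
        super().__init__()
        self.files = files
        self.cache_path = cache_path
        self.index = 0
        sample = np.load(files[0]).astype(np.float32)
        self.device_mem = cuda.mem_alloc(sample.nbytes)

    def get_batch_size(self):
        return 1

    def get_batch(self, names):
        if self.index >= len(self.files):
            return None  # None signals the end of calibration data
        batch = np.ascontiguousarray(
            np.load(self.files[self.index]).astype(np.float32))
        cuda.memcpy_htod(self.device_mem, batch)
        self.index += 1
        return [int(self.device_mem)]

    def read_calibration_cache(self):
        try:
            with open(self.cache_path, "rb") as f:
                return f.read()
        except FileNotFoundError:
            return None

    def write_calibration_cache(self, cache):
        with open(self.cache_path, "wb") as f:
            f.write(cache)

# Wired into the builder config alongside the INT8 flag:
#   config.set_flag(trt.BuilderFlag.INT8)
#   config.int8_calibrator = NpyCalibrator(sorted(glob.glob("calib/*.npy")))
```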

@45153

45153 commented May 14, 2024

> You can refer to this project: bevdet-tensorrt-cpp

Does that project implement INT8 quantization? I only seem to see FP16.
