The README only seems to mention inference speed. Has anyone measured the accuracy loss? In my tests, even the fp16 outputs don't match the ONNX outputs.
You can refer to this project: bevdet-tensorrt-cpp
I looked at your code. My engine was converted with trtexec, and otherwise there is little difference from yours. In the ONNX model, my part1 only extracts features, whereas yours takes additional extrinsics-related parameters. Right now only my fp32 results match the ONNX outputs; fp16 and int8 both fail to match.
Large errors after int8 quantization are normal; on my side fp16 shows essentially no error.
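When checking whether a TensorRT fp16 or int8 engine "matches" the ONNX reference, a raw element-wise equality check will almost always fail; a tolerance-based comparison (max absolute error plus cosine similarity) is more meaningful. Below is a minimal, hedged sketch of such a comparison. It assumes you have already obtained both outputs as NumPy arrays (e.g. the reference from onnxruntime and the test output from your TensorRT engine); the helper name `compare_outputs` and the simulated fp16 round-trip are illustrative, not part of the project's actual code.

```python
import numpy as np

def compare_outputs(ref, test, name="output"):
    """Compare a reference tensor (e.g. from onnxruntime) against a
    TensorRT result using max absolute error and cosine similarity."""
    ref = np.asarray(ref, dtype=np.float32).ravel()
    test = np.asarray(test, dtype=np.float32).ravel()
    max_abs = float(np.max(np.abs(ref - test)))
    denom = float(np.linalg.norm(ref) * np.linalg.norm(test))
    cosine = float(np.dot(ref, test) / denom) if denom > 0 else 1.0
    print(f"{name}: max_abs_err={max_abs:.6f} cosine={cosine:.6f}")
    return max_abs, cosine

# Simulated example: casting to fp16 and back mimics the small
# per-element rounding error an fp16 engine typically introduces.
ref = np.random.default_rng(0).standard_normal(1024).astype(np.float32)
fp16_out = ref.astype(np.float16).astype(np.float32)
max_abs, cosine = compare_outputs(ref, fp16_out, "fp16")
assert cosine > 0.999  # fp16 should stay very close to the reference
```

If fp16 results diverge far beyond this kind of rounding-level error (cosine similarity well below ~0.999), the cause is usually not the precision itself but an overflowing layer or a preprocessing mismatch, which is worth isolating layer by layer.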
Does this project implement int8 quantization? I only seem to see fp16.