Hi! :) I wanted to ask if bfloat16 is supported for even faster transformer inference? Thank you
Hi,
bfloat16 is not supported at this time. There is an open issue in the CTranslate2 repository OpenNMT/CTranslate2#1121.
However, as far as I know bfloat16 offers the same inference speed as float16. It just has a different numerical range (a wider exponent range, at the cost of mantissa precision) that is helpful for some models because it avoids overflow.
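For reference, here is a quick way to see the range difference. This uses PyTorch purely for illustration (an assumption; this project itself runs on CTranslate2, but the dtype properties are the same everywhere):

```python
import torch

# float16: max value 65504, ~3 decimal digits of precision
# bfloat16: same exponent range as float32 (max ~3.4e38),
#           so it rarely overflows, but with less precision
for dtype in (torch.float16, torch.bfloat16, torch.float32):
    info = torch.finfo(dtype)
    print(f"{str(dtype):15} max={info.max:.3e}  eps={info.eps:.3e}")
```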
I'm closing this issue in favor of the one in the CTranslate2 repo.
I see, thank you very much!