
Different outputs for compress function on different devices (Local vs Jetson Nano) #309

Open
nhat120904 opened this issue Sep 26, 2024 · 1 comment

Comments

@nhat120904

y_strings = context_model.entropy_bottleneck.compress(q_latent)

Thank you for the great work on this project. I've run into an issue where calling the compress function above on my local machine produces different results (y_strings) than running the same code with the same input on a Jetson Nano.
Could the difference in output come from hardware-specific optimizations (e.g., mixed precision on the Jetson Nano), or from the framework handling operations differently on the two architectures? Do you have any recommendations for ensuring consistent outputs between the two devices?
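One way to narrow this down is to take hardware-specific float paths out of the picture and compare the bitstreams directly. The sketch below assumes a CompressAI-style model (an `update(force=True)` method that rebuilds the quantized CDF tables and an `entropy_bottleneck.compress` that returns a list of byte strings); the helper names `compress_on_cpu` and `fingerprint` are hypothetical, not part of this project. Run it on both machines with the same saved latent and compare the hashes.

```python
import hashlib

import torch


def compress_on_cpu(context_model, q_latent):
    # Force CPU and float32 so device-specific kernels (e.g. TF32/FP16
    # paths on the Jetson) cannot perturb the latent or the CDF lookups.
    model = context_model.to("cpu").float().eval()
    latent = q_latent.detach().to("cpu").float()

    # For a CompressAI-style model, rebuild the quantized CDF tables so
    # both machines encode against identical tables.
    if hasattr(model, "update"):
        model.update(force=True)

    with torch.no_grad():
        return model.entropy_bottleneck.compress(latent)


def fingerprint(y_strings):
    # Hash the bitstreams so outputs from the two devices are easy to diff.
    digest = hashlib.sha256()
    for s in y_strings:
        digest.update(s)
    return digest.hexdigest()
```

If the hashes match when both devices encode on CPU in float32 but diverge when the Jetson uses its GPU or mixed precision, the mismatch is coming from device-specific floating-point behavior rather than from the entropy coder itself.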
