About EntropyBottleneck.compress and decompress #315
Replies: 1 comment
The model weights used should be the same on both the sender and receiver. So as long as the same distribution table is used for decoding, the lossless entropy decoder should be able to perfectly reconstruct the output. In the case of EntropyBottleneck, the discretized (and quantized) distributions used for each channel can be precomputed and given to both the sender and receiver, so they should give exactly the same results. The current implementation does this.

However, differences in how hardware computes e.g. floating-point operations mean that more complex entropy models like bmshj2018-hyperprior and mbt2018 have more trouble maintaining sender/receiver synchronization. Generally speaking, the way to resolve this is to ensure the same operations are performed in hardware, e.g. by limiting the precision of the floats to something all devices can handle, and ensuring reductions occur in a deterministic order.

Semi-related note: interestingly, not all JPEG decoders produce the exact same result either, so exact byte-for-byte display output doesn't seem to be super critical, as long as the output looks the same.
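To make the reduction-order point concrete, here is a minimal plain-Python illustration (not tied to any model): floating-point addition is not associative, so the same sum evaluated in a different order can differ in the last bits, which is enough to desynchronize a decoder that conditions its probability tables on such values.

```python
# Floating-point addition is not associative: grouping the same three
# terms differently changes the result in the low-order bits.
a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)   # 0.6
print(a == b)           # False

# An entropy model that derives its CDF table from such a value on each
# device would build slightly different tables on sender and receiver,
# and the arithmetic/rANS decode would then diverge.
```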
When using CompressAI for rANS entropy coding, the decompressed data only matches the original if compress and decompress are performed with the same EntropyBottleneck instance. In practical applications, however, compress runs on the sender and decompress on the receiver, so they are not the same EntropyBottleneck instance, and the decompressed data does not match what the sender encoded. How can this be resolved?
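To see why sharing the quantized table is the resolution, here is a minimal, self-contained rANS sketch (this is an illustrative toy coder, not CompressAI's internal implementation): the encoder and decoder are built independently, share only a precomputed integer frequency table, and still roundtrip exactly. In CompressAI the analogous step is to load the same checkpoint on both ends and call `model.update()` before `compress`/`decompress`, which rebuilds the quantized CDF tables deterministically from the weights.

```python
# Toy static rANS coder. Sender and receiver construct their own coder
# state from the *same* quantized frequency table, mirroring how two
# EntropyBottleneck instances share precomputed per-channel CDFs.
PROB_BITS = 12      # frequencies are quantized to sum to 2**12
RANS_L = 1 << 16    # lower bound of the renormalization interval

def build_table(freqs):
    """Return (freqs, cumulative freqs) for an integer frequency table."""
    assert sum(freqs) == 1 << PROB_BITS
    cum = [0]
    for f in freqs:
        cum.append(cum[-1] + f)
    return freqs, cum

def rans_encode(symbols, table):
    freqs, cum = table
    state, out = RANS_L, []
    for s in reversed(symbols):          # encode in reverse so decode is forward
        f = freqs[s]
        x_max = ((RANS_L >> PROB_BITS) << 8) * f
        while state >= x_max:            # renormalize: emit low byte
            out.append(state & 0xFF)
            state >>= 8
        state = ((state // f) << PROB_BITS) + (state % f) + cum[s]
    return state, out

def rans_decode(state, data, n, table):
    freqs, cum = table
    data = data[:]                       # pop bytes in reverse emission order
    mask = (1 << PROB_BITS) - 1
    symbols = []
    for _ in range(n):
        slot = state & mask
        s = next(i for i in range(len(freqs)) if cum[i] <= slot < cum[i + 1])
        state = freqs[s] * (state >> PROB_BITS) + slot - cum[s]
        while state < RANS_L:            # renormalize: pull a byte back in
            state = (state << 8) | data.pop()
        symbols.append(s)
    return symbols

# Sender and receiver each build a coder from the same shared table.
shared_freqs = [2048, 1024, 512, 512]    # sums to 2**PROB_BITS
msg = [0, 1, 2, 0, 3, 0, 1, 1, 2, 3]
state, data = rans_encode(msg, build_table(shared_freqs))
decoded = rans_decode(state, data, len(msg), build_table(shared_freqs))
assert decoded == msg                    # exact, lossless roundtrip
```

The two `build_table` calls stand in for the two EntropyBottleneck instances: as long as both derive the identical integer table, decoding is bit-exact even though no coder state is shared.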