For the file I generated, `xar --dump-header` reports the header size as 28 bytes.

When I run `xar --dump-toc-raw`, I get a piece of the file that is 15572 bytes long, but it starts at offset 64 rather than at 28. Consequently, attempts to hash it produce results that don't match the checksum stored in the archive.
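As a cross-check, here is a minimal sketch that extracts and hashes the compressed TOC at the offset the header actually declares. The layout is assumed from xar's on-disk format (28-byte base header, big-endian fields, `size` at offset 4, `toc_length_compressed` at offset 8), and it assumes the default SHA-1 TOC checksum; the program itself is hypothetical, not part of xar:

```c
/* toc_hash.c -- build with: cc toc_hash.c -lcrypto */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <openssl/sha.h>

int main(int argc, char **argv)
{
    if (argc != 2) return 1;
    FILE *f = fopen(argv[1], "rb");
    unsigned char hdr[28];                  /* base xar header is 28 bytes */
    if (!f || fread(hdr, sizeof(hdr), 1, f) != 1) return 1;

    /* All header fields are big-endian on disk. */
    uint16_t hsize = (uint16_t)((hdr[4] << 8) | hdr[5]);  /* 'size' */
    uint64_t clen = 0;                      /* 'toc_length_compressed' */
    for (int i = 0; i < 8; i++) clen = (clen << 8) | hdr[8 + i];

    /* The compressed TOC begins where the header says the header ends. */
    unsigned char *toc = malloc(clen);
    if (!toc || fseek(f, hsize, SEEK_SET) != 0 ||
        fread(toc, 1, clen, f) != clen) return 1;

    unsigned char md[SHA_DIGEST_LENGTH];
    SHA1(toc, clen, md);                    /* assumes SHA-1 cksum-alg */
    for (int i = 0; i < SHA_DIGEST_LENGTH; i++) printf("%02x", md[i]);
    putchar('\n');
    return 0;
}
```

Hashing the bytes starting at the declared header size should reproduce the TOC checksum stored in the archive; hashing a dump that starts at offset 64 instead of 28 will not.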
The problem seems to be the line that reads the archive header: the variable `xh` has type `xar_header_ex_t`, which is 64 bytes large, so the read pulls 64 bytes from the file. And even though the code recognizes that it has read too much, the file pointer is never moved back.

Since the code doesn't care about the checksum algorithm name, it should just read the regular `xar_header_t` structure. If it then discovers that `xh.size` is larger than what it has already read, it can simply skip ahead in the file.
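A minimal sketch of that approach, with the layout assumed from xar's on-disk format as above (28-byte base header, big-endian `size` field at offset 4); the helper is hypothetical, not xar's actual code:

```c
#include <stdio.h>
#include <stdint.h>

#define XAR_BASE_HEADER_SIZE 28   /* on-disk size of xar_header_t */

/* Read only the base header, then position the stream at the start
 * of the compressed TOC. */
static int skip_header(FILE *f)
{
    unsigned char hdr[XAR_BASE_HEADER_SIZE];

    if (fread(hdr, sizeof(hdr), 1, f) != 1)
        return -1;

    /* 'size' is a big-endian uint16_t at offset 4. */
    uint16_t hsize = (uint16_t)((hdr[4] << 8) | hdr[5]);

    /* Extended header (e.g. the 64-byte xar_header_ex_t carrying the
     * checksum algorithm name): skip the extra bytes instead of
     * over-reading past the start of the TOC. */
    if (hsize > sizeof(hdr) &&
        fseek(f, (long)(hsize - sizeof(hdr)), SEEK_CUR) != 0)
        return -1;

    return 0;
}
```

This way a plain 28-byte header and an extended one both leave the file pointer exactly at the TOC, and no backwards seek is ever needed.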