Below is a minimal example of what I'd like to do. Note that the request size (in voxels) is not a power of 2.
The chunk size for the new array is chosen automatically; in this example it is set to (64, 128, 128).
Presumably because the chunk size does not evenly divide the request size, the array on disk has some empty patches resulting from conflicting concurrent write operations on the same chunk.
Can we simply expose the chunk size of a new array in ZarrWrite to avoid this?
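The original code example from the issue is not preserved in this page, but the conflict can be illustrated with plain arithmetic. A minimal sketch, assuming the (64, 128, 128) chunk size from the issue and a hypothetical non-power-of-2 per-worker request size of (100, 100, 100): two adjacent requests end up touching the same chunk.

```python
# Hypothetical sizes: the chunk size (64, 128, 128) is from the issue;
# the request size (100, 100, 100) is an illustrative stand-in for a
# request size that is not a power of 2.

def touched_chunks(offset, size, chunks):
    """Per-axis (first, last) chunk index a write of `size` at `offset` touches."""
    return tuple((o // c, (o + s - 1) // c)
                 for o, s, c in zip(offset, size, chunks))

chunks = (64, 128, 128)
request = (100, 100, 100)

a = touched_chunks((0, 0, 0), request, chunks)    # axis 0: chunks (0, 1)
b = touched_chunks((100, 0, 0), request, chunks)  # axis 0: chunks (1, 3)
# Both workers touch chunk 1 along axis 0, so two processes read-modify-write
# the same chunk without locking, leaving empty or corrupted patches.
print(a[0], b[0])
```

Choosing a chunk size that divides the request size (or vice versa, with requests starting on chunk boundaries) makes the two ranges disjoint, which is why exposing the chunk size would avoid the conflict.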
bentaculum changed the title from "ZarrWrite downstream of Scan with multiple workers leads to empty patches in written array" to "ZarrWrite upstream of Scan with multiple workers leads to empty patches in written array" on Jun 22, 2021.
You are right, the "empty" or corrupted chunks are a result of non-chunk-aligned parallel writes by the workers spawned by Scan. This is only a problem if there are multiple workers, though.
What we could do is issue a warning whenever ZarrWrite detects non-aligned writes. What do you think?
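A minimal sketch of such a check (a hypothetical helper, not gunpowder's actual implementation): a write region is safe under concurrent workers when its offset and shape are multiples of the chunk size along every axis.

```python
import warnings

def check_chunk_alignment(offset, shape, chunks):
    """Warn if a write region is not aligned to chunk boundaries.

    Hypothetical helper illustrating the check ZarrWrite could perform
    before writing; not part of gunpowder's API.
    """
    for o, s, c in zip(offset, shape, chunks):
        if o % c != 0 or s % c != 0:
            warnings.warn(
                f"write at offset {offset} with shape {shape} is not aligned "
                f"to chunk size {chunks}; concurrent workers may corrupt "
                "shared chunks"
            )
            return False
    return True

check_chunk_alignment((100, 0, 0), (100, 100, 100), (64, 128, 128))  # warns
check_chunk_alignment((64, 0, 0), (64, 128, 128), (64, 128, 128))    # aligned
```

A warning like this would not fix the corruption, but it would surface the mismatch at pipeline build time instead of leaving silent empty patches on disk.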