Writing dict in uns with many keys is slow #1684
Comments
Hmmm @grst, I would suspect the issue is that we recursively write the keys' values as their native data types, which means you end up creating thousands of zarr/hdf5 arrays. I'm not sure we can do much about that at the moment. With the coming async/parallel zarr work this could improve; I'm not sure it works with zarr v2, but I think it does.
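To illustrate the overhead described above, here is a rough sketch (not anndata's actual writer; key names and sizes are made up): creating one HDF5 dataset per key pays a per-dataset metadata cost that dominates when the values are tiny.

```python
# Rough illustration of the per-key overhead (not anndata's actual writer).
import time

import h5py
import numpy as np

values = {f"key_{i}": np.arange(5) for i in range(20_000)}

with h5py.File("many_datasets.h5", "w") as f:
    t0 = time.time()
    for k, v in values.items():
        f.create_dataset(k, data=v)  # one tiny dataset per key
    print(f"20k small datasets: {time.time() - t0:.1f}s")

with h5py.File("one_dataset.h5", "w") as f:
    t0 = time.time()
    f.create_dataset("all", data=np.concatenate(list(values.values())))
    print(f"one concatenated dataset: {time.time() - t0:.2f}s")
```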
Thanks for your response! I think we'll just adapt our data format to be more efficient in that case.
This issue has been automatically marked as stale because it has not had recent activity. |
@grst I had a recent experience with Python thread pools speeding up zarr by 2x, but not hdf5. I think hdf5 is already multithreaded under the hood, but if you want to experiment with that, it could be helpful. I may also take a whack at it. The idea would be to have a thread per element write. File I/O is not subject to the GIL, so in theory this could help somewhat, especially if you're not compressing.
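A minimal sketch of the thread-per-element-write idea, assuming a zarr v2-style API and uncompressed arrays; whether concurrent dataset creation on a single group is safe for a given store is an assumption here, not something the thread confirms:

```python
# Sketch only: write each dict value into its own zarr array from a thread pool.
from concurrent.futures import ThreadPoolExecutor

import numpy as np
import zarr

values = {f"key_{i}": np.arange(5) for i in range(20_000)}
group = zarr.open_group("uns_dict.zarr", mode="w")

def write_one(item):
    key, arr = item
    # No compression, so the work is mostly I/O and releases the GIL.
    group.create_dataset(key, data=arr, compressor=None)

with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(write_one, values.items()))
```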
I don't think speedups on the order of 2x would get us far. In any case, we have now adopted a workaround that gave us a >100x speedup.
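For context, one way such a consolidation could look (an assumption for illustration, not necessarily the workaround scirpy actually adopted): pack the dict of arrays into a single flat array plus offsets, so only a handful of datasets get written instead of hundreds of thousands.

```python
# Hypothetical consolidation of a dict of arrays into a few flat arrays.
import numpy as np

clusters = {f"ct_{i}": np.arange(5) for i in range(20_000)}

keys = np.array(list(clusters.keys()))
lengths = np.array([len(v) for v in clusters.values()])
flat = np.concatenate(list(clusters.values()))
offsets = np.concatenate([[0], np.cumsum(lengths)])

# uns would then hold three arrays instead of 20k entries, e.g.:
# adata.uns["clonotype_clusters"] = {"keys": keys, "values": flat, "offsets": offsets}

# Recovering the i-th entry:
i = 42
entry = flat[offsets[i]:offsets[i + 1]]
```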
Report
Code:
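(The original snippet is not preserved here; the following is a hypothetical reconstruction that assumes a dict of ~20k small numpy arrays stored in `.uns` and written to h5ad.)

```python
# Hypothetical reproduction: many keys in .uns make write/read slow.
import time

import numpy as np
import anndata as ad

adata = ad.AnnData(X=np.zeros((100, 10)))
adata.uns["clonotype_clusters"] = {
    f"ct_{i}": np.arange(5) for i in range(20_000)
}

t0 = time.time()
adata.write_h5ad("test.h5ad")
print(f"write: {time.time() - t0:.1f}s")

t0 = time.time()
ad.read_h5ad("test.h5ad")
print(f"read: {time.time() - t0:.1f}s")
```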
On my machine, this takes 7s to write and 4s to load for a dictionary with only 20k elements.
How hard would it be to make this (significantly) faster?
Additional context
In scirpy, I use dicts of arrays (one index referring to $n$ cells) to store clonotype clusters. The dictionary is not (necessarily) aligned to one of the axes, therefore it lives in uns. Now that we have sped up the clonotype clustering steps, saving the object has become a major bottleneck, as this dict can have several hundred thousand keys.
We could possibly change the dictionary to something more efficient, but that would mean breaking our data format. Therefore I first wanted to check if it can be made faster on the anndata side.
CC @felixpetschko
Versions