Dealing with large structures in the phonon workflow for forcefields #754
Maybe also a more general question, but more related to jobflow: can we implement functionality to automatically connect jobs into one after definition?
Thanks @JaGeo for opening this. I am linking the jobflow-remote issue, as it has a more detailed description of the problems encountered: Matgenix/jobflow-remote#79. And I believe this is also linked to #515.

As mentioned in the other issues, when dealing with forcefields the size and the number of structures involved are going to be much larger than what we are used to in DFT calculations. This implies a much larger memory footprint and having to deal with I/O for large data sets. So, while JSON serialization is a very practical choice for standard workflows, it quickly becomes a bottleneck as the sizes increase. In the case of the phonon workflow, just calling […]. In my opinion, this calls for a different approach.

The first and simplest solution is definitely to reduce the number of jobs, to minimize the amount of read/write operations from the DB. However, as @JaGeo pointed out in the jobflow-remote issue, each job may still take some time, depending on the kind of potential used. Another issue is that memory use will grow over time as forces for more structures are calculated, so the job could go out of memory even after a long running time. Still, this might be a good solution for a large range of structure sizes.

However, I think another possibility would be to start treating this kind of big data as we would treat a charge density or wavefunction file. I don't have a complete solution, but for the phonon use case we may consider an approach along the following lines.
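To make that direction a bit more concrete, here is a minimal, untested sketch (the job names, the extxyz format, and the returned fields are illustrative choices on my side, not an existing atomate2/jobflow API): the displaced supercells are written once to a fast on-disk format, only the file path enters the output document, and each force job reads back just its own slice of the file.

```python
from pathlib import Path

import numpy as np
from ase.io import read, write
from jobflow import job
from pymatgen.io.ase import AseAtomsAdaptor


@job
def dump_displaced_supercells(structures, filename="displacements.extxyz"):
    # Write the displaced supercells to an extxyz file instead of returning them:
    # only the path and a count end up in the job output / database.
    atoms = [AseAtomsAdaptor.get_atoms(s) for s in structures]
    write(filename, atoms, format="extxyz")
    return {"path": str(Path(filename).resolve()), "n_structures": len(structures)}


@job
def evaluate_forces(path, start, stop, calculator):
    # Each force job reads only its own slice of the file, so the memory
    # footprint stays bounded even for very large sets of displacements.
    # In practice the calculator would have to be (de)serializable or be
    # constructed inside the job from its parameters.
    forces = []
    for atoms in read(path, index=slice(start, stop)):
        atoms.calc = calculator
        forces.append(atoms.get_forces())
    return np.array(forces).tolist()
```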
I did not have time to run any tests on this kind of approach, but the key points are to avoid reading and dumping the full list of structures as much as possible, to use a faster format, and to avoid storing very large data in the DB if it is not really useful.

One more point that would be worth testing in the case of the phonon flow is how phonopy behaves in the postprocessing part when dealing with 24000 structures. Memory could be a bottleneck there as well. It may be worth running all the steps in a single job, keeping the last one aside, so that phonopy's requirements can be benchmarked separately. A very fast Calculator could be used for this (e.g. EMT). It would be good to know this before starting a real calculation with a proper potential. Did you maybe already run such a test, @JaGeo?
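For what it's worth, such a benchmark could look roughly like the sketch below (untested; EMT is only a fast stand-in, and for a high-symmetry cell phonopy generates only a handful of displacements, so a low-symmetry or disordered cell would be needed to actually reach the structure counts discussed above). The only part being timed is phonopy's postprocessing.

```python
import time

import numpy as np
from ase import Atoms
from ase.build import bulk
from ase.calculators.emt import EMT
from phonopy import Phonopy
from phonopy.structure.atoms import PhonopyAtoms

# Simple fcc Al cell; EMT supports it and is fast enough that the force
# evaluation itself is negligible compared to the phonopy postprocessing.
ase_cell = bulk("Al", "fcc", a=4.05, cubic=True)
unitcell = PhonopyAtoms(
    symbols=ase_cell.get_chemical_symbols(),
    cell=ase_cell.cell.array,
    scaled_positions=ase_cell.get_scaled_positions(),
)

phonon = Phonopy(unitcell, supercell_matrix=np.diag([4, 4, 4]))
phonon.generate_displacements(distance=0.01)
supercells = phonon.supercells_with_displacements
print(f"{len(supercells)} displaced supercells")

calc = EMT()
forces = []
for sc in supercells:
    atoms = Atoms(symbols=sc.symbols, cell=sc.cell,
                  scaled_positions=sc.scaled_positions, pbc=True)
    atoms.calc = calc
    forces.append(atoms.get_forces())

# The part we actually want to profile: phonopy's postprocessing.
start = time.perf_counter()
phonon.forces = np.array(forces)
phonon.produce_force_constants()
print(f"force constants produced in {time.perf_counter() - start:.1f} s")
```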
@gpetretto Independent of this particular use case, implementing a job with phono3py would require similar considerations. Phono3py can definitely deal with such a large number of structures.
Good point. Then to me this is an even more compelling reason to focus on a solution that would cover a larger number of cases. If you have 1,000,000 structures, running them sequentially will not be an option, even if each calculation requires only a few seconds. That is assuming phono3py can then deal with this number of structures in the postprocessing: it has been some time since I last used it, but it seemed to take quite some time to run even in cases with roughly one thousand structures.
@gpetretto I have seen posters where people did several tens of thousands of structures with DFT. I haven't tried it myself. It might still require a large amount of memory, though.
The current implementation of the phonon workflow works very well for crystalline structures and DFT. However, in the case of forcefields and larger structures without symmetry, a single job doing everything might be preferable. Saving all intermediate structures in any kind of database/file takes up more time than the actual computation and might require a massive amount of memory.
I think it would be nice to have such a job in atomate2, as it could reuse many of the current implementations and functions.
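A hypothetical sketch of what such an all-in-one job could look like (not an existing atomate2 job; the name, signature, and return values are placeholders, and passing a live ASE calculator object through jobflow would need proper (de)serialization in practice):

```python
import numpy as np
from ase import Atoms
from jobflow import job
from phonopy import Phonopy
from pymatgen.io.phonopy import get_phonopy_structure


@job
def forcefield_phonons_single_job(structure, calculator, supercell_matrix,
                                  displacement=0.01, mesh=(20, 20, 20)):
    """Generate displacements, evaluate forces, and postprocess in one job.

    Only small, DB-friendly summary data is returned, so the displaced
    supercells never hit the database or intermediate files.
    """
    phonon = Phonopy(get_phonopy_structure(structure),
                     supercell_matrix=supercell_matrix)
    phonon.generate_displacements(distance=displacement)

    forces = []
    for sc in phonon.supercells_with_displacements:
        atoms = Atoms(symbols=sc.symbols, cell=sc.cell,
                      scaled_positions=sc.scaled_positions, pbc=True)
        atoms.calc = calculator
        forces.append(atoms.get_forces())

    phonon.forces = np.array(forces)
    phonon.produce_force_constants()
    phonon.run_mesh(list(mesh))
    phonon.run_thermal_properties()
    # Keep only compact derived quantities in the output document.
    return phonon.get_thermal_properties_dict()
```

The force loop could also be chunked or parallelized inside the job if memory or runtime become an issue, without changing how the job looks to the database.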
Tagging @gpetretto, as we have recently discussed this in the context of jobflow-remote.
@utf: any comments?