Dear DeepMD-kit community,
I am currently working with two different servers - one with CPU nodes and another with GPU nodes. On both servers, I have installed DeepMD-kit using the same command:
conda create -n deepmd deepmd-kit lammps horovod -c conda-forge
I've noticed an interesting behavior regarding precision settings. When I use models trained with default precision settings, the LAMMPS calculations yield identical results whether run on GPU or CPU nodes. However, when using models trained with "precision": "float32", I observe different results between GPU and CPU calculations. The difference is relatively small, approximately -0.0005 eV/atom.
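For illustration (this is not DeepMD-kit code), here is a minimal NumPy sketch of one likely cause: in float32, the result of a large summation depends on the order in which terms are accumulated, and CPU and GPU kernels generally reduce in different orders. The same order-dependence exists in float64, but the error is orders of magnitude smaller, which would explain why the default (double) precision appears bit-identical while float32 models show a small energy shift.

```python
import numpy as np

rng = np.random.default_rng(42)
x64 = rng.standard_normal(100_000)
x32 = x64.astype(np.float32)

# Reduction order 1: NumPy's built-in sum (pairwise reduction)
pairwise32 = np.sum(x32)
pairwise64 = np.sum(x64)

# Reduction order 2: naive left-to-right accumulation,
# standing in for a kernel that reduces in a different order
seq32 = np.float32(0.0)
for v in x32:
    seq32 += v

seq64 = 0.0
for v in x64:
    seq64 += v

# The float32 discrepancy between the two orders is typically
# many orders of magnitude larger than the float64 one
print("float32 order-dependence:", abs(float(pairwise32) - float(seq32)))
print("float64 order-dependence:", abs(pairwise64 - seq64))
```

Summing the same data in two different orders gives visibly different float32 results but nearly identical float64 results, which is qualitatively consistent with the ~0.0005 eV/atom offset described above.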
I would greatly appreciate it if someone could help me understand:
1. Why this discrepancy occurs specifically with float32 precision
2. What solutions might be available to ensure consistent results across both computing environments
Thank you in advance for your time and assistance.
Best regards