The se_a descriptor cannot utilize all cores when trained on the CPU #4474
Comments
Isn't it 1800%?
Sorry, you are right about the number, but 1800% still means only about 18 cores are working.
How do you set the threads?
```
module load mkl mpi compiler
dp train input.json
```
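Not part of the original job script, but a minimal sketch of the kind of setup the DeePMD-kit parallelism documentation describes for its TensorFlow backend on a CPU-only node: set the OpenMP and TensorFlow thread-pool variables explicitly before `dp train`. The specific values below are placeholders to be tuned for the node, not recommendations from this thread.

```bash
#!/bin/bash
# Sketch of a CPU-only training launch; thread counts are illustrative only.
module load mkl mpi compiler

# Threads used by the MKL/OpenMP kernels inside each operator.
export OMP_NUM_THREADS=16
# TensorFlow's own intra-op and inter-op thread pools.
export TF_INTRA_OP_PARALLELISM_THREADS=16
export TF_INTER_OP_PARALLELISM_THREADS=4

dp train input.json
```

Whether this actually raises utilization depends on how well the se_a kernels scale across cores; the values above are only a starting point for experimentation.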
Summary
When training a model with the se_a descriptor on a 64-core CPU-only node, the CPU utilization is only about 1800%. How can I configure the training to use more of the available cores?
DeePMD-kit Version
2.2.10
Backend and its version
v2
Python Version, CUDA Version, GCC Version, LAMMPS Version, etc
python=3.10.13
Details
```
  PID USER    PR NI VIRT  RES    SHR    S %CPU %MEM TIME+   COMMAND
34016 gengxz  20  0 16.1g 636216 102264 S 1779  0.2 1268:54 python
```
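As a purely illustrative addition (not posted in the issue): assuming the backend is TensorFlow 2.x, a quick check like the sketch below shows which thread-related settings the training environment actually inherits, which helps rule out an environment variable silently capping the run.

```bash
# Sketch: inspect the thread-related environment and TensorFlow's thread pools.
echo "cores visible: $(nproc)"
for v in OMP_NUM_THREADS TF_INTRA_OP_PARALLELISM_THREADS TF_INTER_OP_PARALLELISM_THREADS; do
    printf '%s=%s\n' "$v" "${!v:-<unset>}"
done
# Ask TensorFlow itself which thread-pool sizes it will use (0 means "auto").
python -c "import tensorflow as tf; print('intra-op:', tf.config.threading.get_intra_op_parallelism_threads()); print('inter-op:', tf.config.threading.get_inter_op_parallelism_threads())"
```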