Replies: 1 comment
-
Was this issue resolved? Were you able to run LAMMPS with multiple GPUs?
-
Hi,
I've set up an environment using conda with deepmd-gpu-2.2.10 and dpgen. When training with DeePMD-kit using `CUDA_VISIBLE_DEVICES=0,1,2,3 mpirun -l -launcher=fork -hosts=localhost -np 4 dp`, all 4 GPUs are fully utilized. However, when running LAMMPS with the default `lmp` command, only one GPU is used. Attempting to use `CUDA_VISIBLE_DEVICES=0,1,2,3 mpirun -np 4 lmp` fails to execute the LAMMPS task.

Questions:
1. Does the default LAMMPS in the conda-built deepmd-gpu-2.2.10 not support MPI parallelization? I noticed that `lmp` appears to be compiled with MPI support when I run `lmp -h`.
2. Is GPU acceleration not supported in this LAMMPS build? (I noticed the GPU package is not listed in the installed packages.)
3. How can I enable multi-GPU support for LAMMPS in this environment? Do I have to rebuild and install LAMMPS from source with the built-in or plugin mode of DeePMD-kit? My ultimate goal is to fully utilize all GPU resources by running `lmp` in parallel on GPU nodes.

Also, I've spent considerable time trying to compile LAMMPS from source, but have consistently failed due to various library or version incompatibility issues. This has been extremely frustrating. Any guidance on properly configuring LAMMPS for multi-GPU usage within the conda-built environment would be greatly appreciated.
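For context, the kind of rank-to-GPU mapping I'm after can be sketched as a wrapper script (this is just a sketch, not something from the DeePMD-kit docs; `gpu_bind.sh` is a hypothetical name, and `OMPI_COMM_WORLD_LOCAL_RANK` is the per-node rank variable exported by Open MPI; MPICH-based launchers use a different variable):

```shell
#!/bin/sh
# gpu_bind.sh -- hypothetical wrapper: pin each MPI rank to one GPU.
# Open MPI exports OMPI_COMM_WORLD_LOCAL_RANK for each rank on a node;
# fall back to GPU 0 when the variable is not set (e.g. a serial run).
rank=${OMPI_COMM_WORLD_LOCAL_RANK:-0}
export CUDA_VISIBLE_DEVICES=$rank
echo "rank $rank -> GPU $CUDA_VISIBLE_DEVICES"
exec "$@"
```

which would then be launched as something like `mpirun -np 4 ./gpu_bind.sh lmp -in in.lammps`, so each of the 4 ranks sees exactly one of the 4 GPUs.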