Bug report using gcc and impi on NOAA hera system #123
Hi Thomas, the fact that the created profile's name doesn't specify an MPI vendor ("[+] Created profile gfdl2024.01") indicates that e4s-cl failed to find libmpi.so.12, libmpi_cray.so.12, or libmpi.so.40. e4s-cl tries to locate any of these three and names the newly created profile accordingly. Could you check whether the correct libmpi.so is in your LD_LIBRARY_PATH?
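The detection described above can be sketched as a search over `LD_LIBRARY_PATH` for the known sonames. The soname list comes from the comment; the function itself is a hypothetical illustration, not e4s-cl's actual code:

```python
import os

# Sonames mentioned in the comment above: MPICH/Intel MPI,
# Cray MPICH, and Open MPI respectively (illustrative list).
KNOWN_SONAMES = ["libmpi.so.12", "libmpi_cray.so.12", "libmpi.so.40"]

def find_mpi_library(ld_library_path=None):
    """Return (directory, soname) for the first known MPI soname found
    on LD_LIBRARY_PATH, or None if none is present."""
    if ld_library_path is None:
        ld_library_path = os.environ.get("LD_LIBRARY_PATH", "")
    for directory in filter(None, ld_library_path.split(":")):
        for soname in KNOWN_SONAMES:
            if os.path.isfile(os.path.join(directory, soname)):
                return directory, soname
    return None

if __name__ == "__main__":
    hit = find_mpi_library()
    if hit is None:
        # Mirrors the failure mode described above: with no match,
        # no vendor suffix can be chosen for the profile name.
        print("no known libmpi soname on LD_LIBRARY_PATH")
    else:
        print("found %s in %s" % (hit[1], hit[0]))
```

Running this in the same shell where `e4s-cl init` failed is a quick way to see whether the MPI library is visible at all.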
This is done using a Python script that accesses an MPI library from the environment, then loads and uses well-known symbols to run basic operations, ensuring the library works properly and loads all the libraries it needs to function (as they can sometimes lazy-load libraries). You can see this in action here:
You can see how this is done here. Intel MPI is treated as MPICH, as they share an ABI and sonames. As Frederick suggested, something is preventing the proper analysis of your MPI environment. Please share the contents of the created profile and, if possible, compile a sample MPI program with this environment and share the output. If you can, try running that tester script in your desired MPI environment and see whether it gives you any information about what is failing.
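The tester approach described above (dlopen the library, then look up well-known symbols so lazy-loaded dependencies get pulled in) might look roughly like this with `ctypes`. This is a hedged sketch, not the actual e4s-cl tester; the symbol list is an assumption:

```python
import ctypes

def probe_mpi(soname):
    """Try to dlopen an MPI library and resolve well-known symbols.
    Returns a dict mapping symbol name -> whether it resolved, or
    None if the library itself could not be loaded."""
    try:
        lib = ctypes.CDLL(soname)
    except OSError:
        # dlopen failed: wrong path, missing dependency, or ABI issue.
        return None
    # Core symbols expected in both MPICH-ABI and Open MPI builds
    # (assumed list for illustration).
    symbols = ["MPI_Init", "MPI_Comm_rank", "MPI_Finalize"]
    return {name: hasattr(lib, name) for name in symbols}

if __name__ == "__main__":
    for soname in ("libmpi.so.12", "libmpi.so.40"):
        result = probe_mpi(soname)
        if result is None:
            print(f"{soname}: could not be loaded")
        else:
            print(f"{soname}: {result}")
```

If `probe_mpi` returns None for every soname inside your target environment, that points to the same root cause as the unnamed profile above: the loader cannot find or open a usable libmpi.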
While trying to run an
`e4s-cl init`
I received an error that said it was an e4s-cl bug, and to report the contents of a debug file on GitHub. Below are the pasted contents of the file:

Here are the modules I have loaded:
My container uses GCC 13 and MPICH installed with Spack.