Mpi v2 #330
Merged
Changes from 21 commits (28 commits total)
6488f1a
remove logic for olaf/openmp
ptrbortolotti a49cc3b
support in setting mpi parameters
ptrbortolotti f62be3f
add documentation page
ptrbortolotti 00f9713
example for simple call to openfast with mpi
ptrbortolotti 9cb54d3
postpone return so that we can stack the preMPI call and the actual W…
ptrbortolotti 9882405
work in progress, mpi settings moved into modeling options
ptrbortolotti ca3a4d8
adjust if statements
ptrbortolotti 139263a
broadcast mod and opt options
ptrbortolotti 77ad691
more progress, not there yet
ptrbortolotti ce527f6
sequential or preMPI and actual weis call now working
ptrbortolotti 86fef37
better, but MPI can still hang
ptrbortolotti cf4fcee
adjust if settings, things seem to be running fine now
ptrbortolotti 97a6bb1
fix last typos
ptrbortolotti ee0f6bc
add tests, switch to mpiexec
ptrbortolotti 36d1653
remove sbatch kestrel (can't mantain...) and shorten OF sims
ptrbortolotti 9ad56b5
remove outdated py file
ptrbortolotti e9aa791
suppress print statements when not needed
ptrbortolotti a5a71cf
adjust if condition
ptrbortolotti fcae582
lock openfast wisdem and rosco
ptrbortolotti 005cd35
adjust list of examples run during testing
ptrbortolotti 58b697c
try again
ptrbortolotti 5a9bc96
Tidy up weis_driver_loads
dzalkind 08ebbf3
Print information about modeling options to user
dzalkind b903ba1
Simplify weis_driver_loads more
dzalkind fc4cf7c
Make control example case
dzalkind 7aedb7c
Count elements in each design variable
dzalkind 9a99005
bring back weis_driver_model_only
ptrbortolotti 00f02c7
update front scripts examples 3 4 5
ptrbortolotti
@@ -20,6 +20,7 @@ Using WEIS
   installation
   how_weis_works
   inputs/yaml_inputs
+  run_in_parallel

WEIS Visualization APP
@@ -0,0 +1,93 @@
Run in parallel
---------------

WEIS can be run sequentially on a single processor, but it can also be parallelized to handle larger problems in a timely manner.
Background
------------------------------------

The parallelization of WEIS leverages MPI through the `mpi4py <https://mpi4py.readthedocs.io/en/stable/>`_ library. Before proceeding, please make sure that your WEIS environment includes mpi4py, as discussed in the README.md at the root level of this repo, or on the `WEIS GitHub page <https://github.com/WISDEM/WEIS/>`_.

Parallelization in WEIS happens at two levels:

* The first level is triggered when an optimization is run with a gradient-based optimizer. In this case, the OpenMDAO library executes the finite differences in parallel, assembles all partial derivatives into a single Jacobian, and the optimization iterations proceed.
* The second level is triggered when multiple OpenFAST runs are specified. These are executed in parallel.

The two levels of parallelization are integrated and can coexist as long as sufficient computational resources are available.

WEIS helps you set up the right call to MPI given the available computational resources; see the sections below.
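To build intuition for how the two levels multiply resource needs, here is a back-of-the-envelope sketch. The function name and formula below are illustrative assumptions, not WEIS's actual allocation logic; the ``--preMPI=True`` dry run reports the real recommendation for your problem.

```python
# Illustrative estimate only: WEIS's real allocation is reported by the
# --preMPI=True dry run. Assumes forward finite differences (one extra
# evaluation per design variable) and one rank per OpenFAST simulation.

def estimate_ranks(n_design_vars: int, n_openfast_runs: int) -> int:
    """Rough count of MPI ranks the two parallelization levels could use."""
    # Level 1: one finite-difference evaluation per design variable
    # (at least one evaluation, even with no design variables).
    n_evaluations = max(n_design_vars, 1)
    # Level 2: each evaluation fires off its own batch of OpenFAST runs.
    return n_evaluations * n_openfast_runs

print(estimate_ranks(n_design_vars=0, n_openfast_runs=7))   # 7: pure DLC run
print(estimate_ranks(n_design_vars=3, n_openfast_runs=7))   # 21: small optimization
```

The point of the sketch is only that the two levels multiply, which is why the dry run described below is worth doing before requesting resources.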
How to run WEIS in parallel
------------------------------------

Running WEIS in parallel is slightly more involved than running it sequentially. A first example of a parallel call for a design optimization in WEIS is provided in example `03_NREL5MW_OC3 <https://github.com/WISDEM/WEIS/tree/master/examples/03_NREL5MW_OC3_spar>`_. A second example runs an OpenFAST model in parallel; see example `02_run_openfast_cases <https://github.com/WISDEM/WEIS/tree/develop/examples/02_run_openfast_cases>`_.
Design optimization in parallel
------------------------------------

These instructions follow example `03_NREL5MW_OC3 <https://github.com/WISDEM/WEIS/tree/master/examples/03_NREL5MW_OC3_spar>`_. To run the design optimization in parallel, the first step is a pre-processing call to WEIS. This step returns the number of finite differences implied by the analysis_options.yaml file. To execute this step, navigate to example 03 in a terminal window and type:

.. code-block:: bash

    python weis_driver.py --preMPI=True

The code will print to the terminal the best setup for an optimal MPI call given your problem.
If you are resource constrained, you can pass the keyword argument ``--maxnP`` to set the maximum number of processors that you have available. If you have 20, type:

.. code-block:: bash

    python weis_driver.py --preMPI=True --maxnP=20

These two commands will help you set up the appropriate computational resources.
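As a sketch of what such a cap means in practice, if each finite-difference evaluation needs a fixed number of ranks for its OpenFAST runs, then a processor budget limits how many evaluations can proceed at once. The helper below is hypothetical, not WEIS code:

```python
# Hypothetical sketch of capping parallelism with a processor budget
# (max_np); not WEIS's actual scheduling logic, which the --preMPI=True
# dry run reports for your specific problem.

def parallel_evaluations(max_np: int, ranks_per_evaluation: int) -> int:
    """How many finite-difference evaluations fit in the budget at once."""
    # At least one evaluation always proceeds, even on a tight budget.
    return max(max_np // ranks_per_evaluation, 1)

# With 20 processors and 7 OpenFAST runs per evaluation,
# two evaluations can run side by side.
print(parallel_evaluations(max_np=20, ranks_per_evaluation=7))  # 2
```

Evaluations beyond the budget simply queue up, so a smaller ``--maxnP`` trades wall-clock time for fewer processors.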
At this point, you are ready to launch WEIS in parallel. If you have 20 processors, your call to WEIS will look like:

.. code-block:: bash

    mpiexec -np 20 python weis_driver.py
Parallelize calls to OpenFAST
------------------------------------

WEIS can be used to run OpenFAST simulations, such as design load cases. More information about setting up DLCs can be found in the `DLC generator documentation <https://github.com/WISDEM/WEIS/blob/docs/docs/dlc_generator.rst>`_.

To do so, WEIS is run with a single function evaluation that fires off a number of OpenFAST simulations.

Let's look at an example in `02_run_openfast_cases <https://github.com/WISDEM/WEIS/tree/develop/examples/02_run_openfast_cases>`_.

In a terminal, navigate to example 02 and type:

.. code-block:: bash

    python weis_driver_loads.py --preMPI=True
The terminal should return this message:

.. code-block:: bash

    Your problem has 0 design variable(s) and 7 OpenFAST run(s)

    You are not running a design optimization, a design of experiment, or your optimizer is not gradient based. The number of parallel function evaluations is set to 1

    To run the code in parallel with MPI, execute one of the following commands

    If you have access to (at least) 8 processors, please call WEIS as:
    mpiexec -np 8 python weis_driver.py

    If you do not have access to 8 processors
    please provide your maximum available number of processors by typing:
    python weis_driver.py --preMPI=True --maxnP=xx
    And substitute xx with your number of processors
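The recommendation of 8 processors is consistent with one rank driving the single function evaluation plus one rank per OpenFAST run. The arithmetic below is our reading of the message, not WEIS's internal logic:

```python
# Our reading of the preMPI message above (an assumption, not WEIS source):
# one rank for the single function evaluation, plus one per OpenFAST run.
n_function_evaluations = 1
n_openfast_runs = 7
recommended_np = n_function_evaluations + n_openfast_runs
print(recommended_np)  # 8
```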
If you have access to 8 processors, you are now ready to execute your script by typing:

.. code-block:: bash

    mpiexec -np 8 python weis_driver_loads.py

If you have access to fewer processors, say 4, adjust the ``-np`` entry accordingly:

.. code-block:: bash

    mpiexec -np 4 python weis_driver_loads.py
Do we want the double-run when MPI is used to be the default for all examples? Or should users do a dry-run and update the modeling options themselves? We're demonstrating both here, which is also fine.
I found both use cases useful. Expert users might want to know the size of their problem, whereas newer users may just want to run.