
Mpi v2 #330

Merged
merged 28 commits on Jan 3, 2025
Changes from 21 commits
Commits
28 commits
6488f1a
remove logic for olaf/openmp
ptrbortolotti Dec 18, 2024
a49cc3b
support in setting mpi parameters
ptrbortolotti Dec 19, 2024
f62be3f
add documentation page
ptrbortolotti Dec 19, 2024
00f9713
example for simple call to openfast with mpi
ptrbortolotti Dec 20, 2024
9cb54d3
postpone return so that we can stack the preMPI call and the actual W…
ptrbortolotti Dec 21, 2024
9882405
work in progress, mpi settings moved into modeling options
ptrbortolotti Dec 22, 2024
ca3a4d8
adjust if statements
ptrbortolotti Dec 22, 2024
139263a
broadcast mod and opt options
ptrbortolotti Dec 22, 2024
77ad691
more progress, not there yet
ptrbortolotti Dec 22, 2024
ce527f6
sequential or preMPI and actual weis call now working
ptrbortolotti Dec 24, 2024
86fef37
better, but MPI can still hang
ptrbortolotti Dec 24, 2024
cf4fcee
adjust if settings, things seem to be running fine now
ptrbortolotti Dec 27, 2024
97a6bb1
fix last typos
ptrbortolotti Dec 27, 2024
ee0f6bc
add tests, switch to mpiexec
ptrbortolotti Dec 27, 2024
36d1653
remove sbatch kestrel (can't mantain...) and shorten OF sims
ptrbortolotti Dec 27, 2024
9ad56b5
remove outdated py file
ptrbortolotti Dec 27, 2024
e9aa791
suppress print statements when not needed
ptrbortolotti Dec 27, 2024
a5a71cf
adjust if condition
ptrbortolotti Dec 27, 2024
fcae582
lock openfast wisdem and rosco
ptrbortolotti Dec 27, 2024
005cd35
adjust list of examples run during testing
ptrbortolotti Dec 29, 2024
58b697c
try again
ptrbortolotti Dec 30, 2024
5a9bc96
Tidy up weis_driver_loads
dzalkind Dec 30, 2024
08ebbf3
Print information about modeling options to user
dzalkind Dec 30, 2024
b903ba1
Simplify weis_driver_loads more
dzalkind Dec 30, 2024
fc4cf7c
Make control example case
dzalkind Dec 30, 2024
7aedb7c
Count elements in each design variable
dzalkind Dec 30, 2024
9a99005
bring back weis_driver_model_only
ptrbortolotti Jan 2, 2025
00f02c7
update front scripts examples 3 4 5
ptrbortolotti Jan 2, 2025
14 changes: 13 additions & 1 deletion .github/workflows/run_exhaustive_examples.yml
@@ -76,7 +76,19 @@ jobs:
run: |
cd weis/test
python run_examples.py


# Run parallel script calling OpenFAST
- name: Run parallel OpenFAST
run: |
cd examples/02_run_openfast_cases
mpiexec -np 2 python weis_driver_loads.py

# Run parallel script calling a simple optimization
- name: Run parallel simple optimization
run: |
cd examples/03_NREL5MW_OC3_spar
mpiexec -np 2 python weis_driver.py

# Run scripts within rotor_opt folder with MPI
- name: Run parallel examples rotor optimization
run: |
1 change: 1 addition & 0 deletions docs/index.rst
@@ -20,6 +20,7 @@ Using WEIS
installation
how_weis_works
inputs/yaml_inputs
run_in_parallel


WEIS Visualization APP
93 changes: 93 additions & 0 deletions docs/run_in_parallel.rst
@@ -0,0 +1,93 @@
Run in parallel
---------------

WEIS can be run sequentially on a single processor. WEIS can also be parallelized to handle larger problems in a timely manner.


Background
------------------------------------

The parallelization of WEIS leverages the `MPI library <https://mpi4py.readthedocs.io/en/stable/>`_ (mpi4py). Before proceeding, please make sure that your WEIS environment includes the library mpi4py, as discussed in the README.md at the root level of this repo, or on the `WEIS GitHub page <https://github.com/WISDEM/WEIS/>`_.

Parallelization in WEIS happens at two levels:

* The first level is triggered when an optimization is run and a gradient-based optimizer is called. In this case, the OpenMDAO library executes the finite differences in parallel. OpenMDAO then assembles all partial derivatives into a single Jacobian and the optimization iterations proceed.
* The second level is triggered when multiple OpenFAST runs are specified. These runs are executed in parallel.

The two levels of parallelization are integrated and can co-exist as long as sufficient computational resources are available.
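
As a purely illustrative sketch (the variable names below are hypothetical, not the WEIS internals), the two levels multiply: every finite difference evaluated in parallel needs its own set of processors for the OpenFAST runs it triggers.

.. code-block:: python

    # Hypothetical sketch of how the two levels of parallelism stack
    n_finite_differences = 5  # e.g. five design variables with forward differencing
    n_openfast_runs = 7       # e.g. seven DLC cases per function evaluation

    # One processor per OpenFAST run, for every finite-difference point
    n_processors = n_finite_differences * n_openfast_runs
    print(f"mpiexec -np {n_processors} python weis_driver.py")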

WEIS helps you set up the right call to MPI given the available computational resources; see the sections below.


How to run WEIS in parallel
------------------------------------

Running WEIS in parallel is slightly more involved than running it sequentially. A first example of a parallel call for a design optimization in WEIS is provided in example `03_NREL5MW_OC3 <https://github.com/WISDEM/WEIS/tree/master/examples/03_NREL5MW_OC3_spar>`_. A second example, `02_run_openfast_cases <https://github.com/WISDEM/WEIS/tree/develop/examples/02_run_openfast_cases>`_, runs an OpenFAST model in parallel.


Design optimization in parallel
------------------------------------

These instructions follow example `03_NREL5MW_OC3 <https://github.com/WISDEM/WEIS/tree/master/examples/03_NREL5MW_OC3_spar>`_. To run the design optimization in parallel, the first step consists of a pre-processing call to WEIS. This step returns the number of finite differences implied by the design variables specified in the analysis_options.yaml file. To execute this step, in a terminal window navigate to example 03 and type:

.. code-block:: bash

python weis_driver.py --preMPI=True

The code will print to the terminal the recommended MPI call for your problem.

If you are resource constrained, you can pass the keyword argument ``--maxnP`` to set the maximum number of processors that you have available. For example, if you have 20, type:

.. code-block:: bash

python weis_driver.py --preMPI=True --maxnP=20

These two commands will help you set up the appropriate computational resources.
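
For reference, the example drivers expose these flags through a small ``argparse`` block; a condensed sketch (mirroring the example drivers in the WEIS repository) looks like:

.. code-block:: python

    import argparse

    # Command-line flags used by the example drivers
    parser = argparse.ArgumentParser(description="Run WEIS driver with MPI preprocessing flags.")
    parser.add_argument("--preMPI", type=bool, default=False,
                        help="Only preprocess and report the recommended MPI settings.")
    parser.add_argument("--maxnP", type=int, default=1,
                        help="Maximum number of processors available.")
    args, _ = parser.parse_known_args()
    # Note: with type=bool, argparse treats any non-empty value (even "False") as True,
    # so simply omit --preMPI to run WEIS normally.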

At this point, you are ready to launch WEIS in parallel. If you have 20 processors, your call to WEIS will look like:

.. code-block:: bash

mpiexec -np 20 python weis_driver.py


Parallelize calls to OpenFAST
------------------------------------

WEIS can be used to run OpenFAST simulations, such as design load cases (DLCs). More information about setting up DLCs can be found in the `DLC generator documentation <https://github.com/WISDEM/WEIS/blob/docs/docs/dlc_generator.rst>`_.

To do so, WEIS is run with a single function evaluation that fires off a number of OpenFAST simulations.
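
The number of OpenFAST runs reported by the preprocessing step simply follows from the DLC definition in the modeling options (roughly, cases times turbulence seeds). A purely illustrative sketch, not tied to any specific example:

.. code-block:: python

    # Hypothetical illustration of how the OpenFAST run count arises
    n_dlc_cases = 7  # e.g. wind-speed bins defined in the DLC_driver block
    n_seeds = 1      # turbulence seeds per case
    n_openfast_runs = n_dlc_cases * n_seeds
    print(f"{n_openfast_runs} OpenFAST run(s) per function evaluation")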

Let's look at an example in `02_run_openfast_cases <https://github.com/WISDEM/WEIS/tree/develop/examples/02_run_openfast_cases>`_.

In a terminal, navigate to example 02 and type:

.. code-block:: bash

    python weis_driver_loads.py --preMPI=True

The terminal should return this message:

.. code-block:: bash

    Your problem has 0 design variable(s) and 7 OpenFAST run(s)

    You are not running a design optimization, a design of experiment, or your optimizer is not gradient based. The number of parallel function evaluations is set to 1

    To run the code in parallel with MPI, execute one of the following commands

    If you have access to (at least) 8 processors, please call WEIS as:
    mpiexec -np 8 python weis_driver.py

    If you do not have access to 8 processors
    please provide your maximum available number of processors by typing:
    python weis_driver.py --preMPI=True --maxnP=xx
    And substitute xx with your number of processors

If you have access to 8 processors, you are now ready to execute your script by typing:

.. code-block:: bash

    mpiexec -np 8 python weis_driver_loads.py

If you have access to fewer processors, say 4, adjust the ``-np`` entry accordingly:

.. code-block:: bash

    mpiexec -np 4 python weis_driver_loads.py
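
For users who prefer to script the whole flow, the updated example drivers chain the preMPI dry run and the actual run inside a single file by passing the recommended processor counts back through a modeling override. A condensed, illustrative sketch (file names from example 03; the import path is assumed and may differ in your installation):

.. code-block:: python

    from openmdao.utils.mpi import MPI
    from weis.glue_code.runWEIS import run_weis  # assumed import path

    fname_wt_input = 'nrel5mw-spar_oc3.yaml'
    fname_modeling_options = 'modeling_options.yaml'
    fname_analysis_options = 'analysis_options.yaml'

    if MPI:
        # Dry run: size the problem and record the recommended processor counts
        _, modeling_options, _ = run_weis(fname_wt_input, fname_modeling_options,
                                          fname_analysis_options,
                                          prepMPI=True, maxnP=MPI.COMM_WORLD.Get_size())
        of_conf = modeling_options['General']['openfast_configuration']
        modeling_override = {'General': {'openfast_configuration':
                                         {'nFD': of_conf['nFD'], 'nOFp': of_conf['nOFp']}}}
    else:
        modeling_override = None

    # Actual run, reusing the processor split computed above
    wt_opt, modeling_options, opt_options = run_weis(fname_wt_input, fname_modeling_options,
                                                     fname_analysis_options,
                                                     modeling_override=modeling_override)

A single ``mpiexec -np <n> python weis_driver.py`` call then executes both the sizing pass and the actual analysis in one submission.
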
6 changes: 3 additions & 3 deletions environment.yml
@@ -9,16 +9,16 @@ dependencies:
- mat4py
- nlopt
- numpydoc
- openfast>=3.5.3
- openfast=3.5.5
- openraft>=1.2.4
- osqp
- pcrunch
- pip
- pyhams>=1.3
#- pyoptsparse
- rosco>=2.9.4
- rosco=2.9.5
- smt
- wisdem>=3.16.4
- wisdem=3.18.1
- pip:
- dash-bootstrap-components
- dash-mantine-components
59 changes: 53 additions & 6 deletions examples/02_run_openfast_cases/weis_driver_loads.py
@@ -6,19 +6,66 @@
from openmdao.utils.mpi import MPI

## File management
run_dir = os.path.dirname( os.path.realpath(__file__) )
fname_wt_input = run_dir + os.sep + 'IEA-15-240-RWT.yaml'
fname_modeling_options = run_dir + os.sep + 'modeling_options_loads.yaml'
fname_analysis_options = run_dir + os.sep + 'analysis_options_loads.yaml'
run_dir = os.path.dirname( os.path.realpath(__file__) )
fname_wt_input = os.path.join(run_dir, 'IEA-15-240-RWT.yaml')
fname_modeling_options = os.path.join(run_dir, 'modeling_options_loads.yaml')
fname_analysis_options = os.path.join(run_dir, 'analysis_options_loads.yaml')

import argparse
# Set up argument parser
parser = argparse.ArgumentParser(description="Run WEIS driver with flag prepping for MPI run.")
# Add the flag
parser.add_argument("--preMPI", type=bool, default=False, help="Flag for preprocessing MPI settings (True or False).")
parser.add_argument("--maxnP", type=int, default=1, help="Maximum number of processors available.")
# Parse the arguments
args, _ = parser.parse_known_args()
# Use the flag in your script
if args.preMPI:
print("Preprocessor flag is set to True. Running preprocessing setting up MPI run.")
else:
print("Preprocessor flag is set to False. Run WEIS now.")

tt = time.time()
wt_opt, modeling_options, opt_options = run_weis(fname_wt_input, fname_modeling_options, fname_analysis_options)

# Set max number of processes, either set by user or extracted from MPI
if args.preMPI:
maxnP = args.maxnP
else:
if MPI:
maxnP = MPI.COMM_WORLD.Get_size()
else:
maxnP = 1

Review comment (Collaborator):
Do we want the double-run when MPI is used to be the default for all examples? Or should users do a dry-run and update the modeling options themselves? We're demonstrating both here, which is also fine.

Reply (Collaborator, PR author):
I found both use cases useful. Expert users might want to know the size of their problem, whereas newer users may just want to run.

if args.preMPI:
_, _, _ = run_weis(fname_wt_input,
fname_modeling_options,
fname_analysis_options,
prepMPI=True,
maxnP = maxnP)
else:
if MPI:
_, modeling_options, _ = run_weis(fname_wt_input,
fname_modeling_options,
fname_analysis_options,
prepMPI=True,
maxnP = maxnP)

modeling_override = {}
modeling_override['General'] = {}
modeling_override['General']['openfast_configuration'] = {}
modeling_override['General']['openfast_configuration']['nFD'] = modeling_options['General']['openfast_configuration']['nFD']
modeling_override['General']['openfast_configuration']['nOFp'] = modeling_options['General']['openfast_configuration']['nOFp']
else:
modeling_override = None
wt_opt, modeling_options, opt_options = run_weis(fname_wt_input,
fname_modeling_options,
fname_analysis_options,
modeling_override=modeling_override)

if MPI:
rank = MPI.COMM_WORLD.Get_rank()
else:
rank = 0
if rank == 0:
if rank == 0 and args.preMPI == False:
print("Run time: %f"%(time.time()-tt))
sys.stdout.flush()
2 changes: 1 addition & 1 deletion examples/03_NREL5MW_OC3_spar/analysis_options.yaml
@@ -42,7 +42,7 @@ driver:
max_major_iter: 10 # Maximum number of major design iterations (SNOPT)
max_minor_iter: 100 # Maximum number of minor design iterations (SNOPT)
max_iter: 2 # Maximum number of iterations (SLSQP)
solver: LN_COBYLA # Optimization solver. Other options are 'SLSQP' - 'CONMIN'
solver: SLSQP # Optimization solver. Other options are 'SLSQP' - 'CONMIN'
step_size: 1.e-3 # Step size for finite differencing
form: central # Finite differencing mode, either forward or central

4 changes: 2 additions & 2 deletions examples/03_NREL5MW_OC3_spar/modeling_options.yaml
@@ -97,11 +97,11 @@ DLC_driver:
DLCs:
- DLC: "1.1"
ws_bin_size: 2
wind_speed: [14.]
wind_speed: [14.,16., 18., 20., 22.]
wave_height: [7.]
wave_period: [1.]
n_seeds: 1
analysis_time: 20.
analysis_time: 10.
transient_time: 0.
turbulent_wind:
HubHt: 90.0
59 changes: 53 additions & 6 deletions examples/03_NREL5MW_OC3_spar/weis_driver.py
@@ -6,19 +6,66 @@
from openmdao.utils.mpi import MPI

## File management
run_dir = os.path.dirname( os.path.realpath(__file__) )
fname_wt_input = run_dir + os.sep + "nrel5mw-spar_oc3.yaml"
fname_modeling_options = run_dir + os.sep + 'modeling_options.yaml'
fname_analysis_options = run_dir + os.sep + 'analysis_options_noopt.yaml'
run_dir = os.path.dirname( os.path.realpath(__file__) )
fname_wt_input = os.path.join(run_dir, 'nrel5mw-spar_oc3.yaml')
fname_modeling_options = os.path.join(run_dir, 'modeling_options.yaml')
fname_analysis_options = os.path.join(run_dir, 'analysis_options.yaml')

import argparse
# Set up argument parser
parser = argparse.ArgumentParser(description="Run WEIS driver with flag prepping for MPI run.")
# Add the flag
parser.add_argument("--preMPI", type=bool, default=False, help="Flag for preprocessing MPI settings (True or False).")
parser.add_argument("--maxnP", type=int, default=1, help="Maximum number of processors available.")
# Parse the arguments
args, _ = parser.parse_known_args()
# Use the flag in your script
if args.preMPI:
print("Preprocessor flag is set to True. Running preprocessing setting up MPI run.")
else:
print("Preprocessor flag is set to False. Run WEIS now.")

tt = time.time()
wt_opt, modeling_options, opt_options = run_weis(fname_wt_input, fname_modeling_options, fname_analysis_options)

# Set max number of processes, either set by user or extracted from MPI
if args.preMPI:
maxnP = args.maxnP
else:
if MPI:
maxnP = MPI.COMM_WORLD.Get_size()
else:
maxnP = 1

if args.preMPI:
_, _, _ = run_weis(fname_wt_input,
fname_modeling_options,
fname_analysis_options,
prepMPI=True,
maxnP = maxnP)
else:
if MPI:
_, modeling_options, _ = run_weis(fname_wt_input,
fname_modeling_options,
fname_analysis_options,
prepMPI=True,
maxnP = maxnP)

modeling_override = {}
modeling_override['General'] = {}
modeling_override['General']['openfast_configuration'] = {}
modeling_override['General']['openfast_configuration']['nFD'] = modeling_options['General']['openfast_configuration']['nFD']
modeling_override['General']['openfast_configuration']['nOFp'] = modeling_options['General']['openfast_configuration']['nOFp']
else:
modeling_override = None
wt_opt, modeling_options, opt_options = run_weis(fname_wt_input,
fname_modeling_options,
fname_analysis_options,
modeling_override=modeling_override)

if MPI:
rank = MPI.COMM_WORLD.Get_rank()
else:
rank = 0
if rank == 0:
if rank == 0 and args.preMPI == False:
print("Run time: %f"%(time.time()-tt))
sys.stdout.flush()