Merge pull request #330 from WISDEM/mpi_v2
ptrbortolotti authored Jan 3, 2025
2 parents eedc910 + 00f02c7 commit 29fcb49
Showing 23 changed files with 617 additions and 566 deletions.
14 changes: 13 additions & 1 deletion .github/workflows/run_exhaustive_examples.yml
@@ -76,7 +76,19 @@ jobs:
run: |
cd weis/test
python run_examples.py
# Run parallel script calling OpenFAST
- name: Run parallel OpenFAST
run: |
cd examples/02_run_openfast_cases
mpiexec -np 2 python weis_driver_loads.py
# Run parallel script calling a simple optimization
- name: Run parallel optimization
run: |
cd examples/03_NREL5MW_OC3_spar
mpiexec -np 2 python weis_driver.py
# Run scripts within rotor_opt folder with MPI
- name: Run parallel examples rotor optimization
run: |
1 change: 1 addition & 0 deletions docs/index.rst
@@ -20,6 +20,7 @@ Using WEIS
installation
how_weis_works
inputs/yaml_inputs
run_in_parallel


WEIS Visualization APP
93 changes: 93 additions & 0 deletions docs/run_in_parallel.rst
@@ -0,0 +1,93 @@
Run in parallel
---------------

WEIS can be run sequentially on a single processor. WEIS can also be parallelized to handle larger problems in a timely manner.


Background
------------------------------------

The parallelization of WEIS leverages the `MPI library <https://mpi4py.readthedocs.io/en/stable/>`_. Before proceeding, please make sure that your WEIS environment includes the library mpi4py, as discussed in the README.md at the root level of this repo, or on the `WEIS GitHub page <https://github.com/WISDEM/WEIS/>`_.

Parallelization in WEIS happens at two levels:

* The first level is triggered when an optimization is run and a gradient-based optimizer is called. In this case, the OpenMDAO library executes the finite differences in parallel. OpenMDAO then assembles all partial derivatives into a single Jacobian and the iterations progress.
* The second level is triggered when multiple OpenFAST runs are specified. These are executed in parallel.

The two levels of parallelization are integrated and can co-exist as long as sufficient computational resources are available.
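The example drivers later in this commit detect MPI at run time and fall back to a serial run when it is unavailable. A minimal, standalone sketch of that detection pattern (independent of WEIS itself):

```python
# Sketch of the MPI-detection pattern used by the WEIS example drivers:
# use the MPI communicator when mpi4py is available, otherwise fall back
# to a single-process run where this process is rank 0.
try:
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    n_procs = comm.Get_size()
except ImportError:
    comm = None
    rank = 0
    n_procs = 1

if rank == 0:
    # Only one process prints, so output is not duplicated under mpiexec
    print(f"Running on {n_procs} process(es)")
```

Run serially this prints ``Running on 1 process(es)``; under ``mpiexec -np N`` each rank sees the same communicator and only rank 0 prints.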

WEIS helps you set up the right call to MPI given the available computational resources; see the sections below.


How to run WEIS in parallel
------------------------------------

Running WEIS in parallel is slightly more involved than running it sequentially. A first example of a parallel call for a design optimization in WEIS is provided in example `03_NREL5MW_OC3 <https://github.com/WISDEM/WEIS/tree/master/examples/03_NREL5MW_OC3_spar>`_. A second example runs an OpenFAST model in parallel; see example `02_run_openfast_cases <https://github.com/WISDEM/WEIS/tree/develop/examples/02_run_openfast_cases>`_.


Design optimization in parallel
------------------------------------

These instructions follow example `03_NREL5MW_OC3 <https://github.com/WISDEM/WEIS/tree/master/examples/03_NREL5MW_OC3_spar>`_. To run the design optimization in parallel, the first step is a pre-processing call to WEIS. This step returns the number of finite differences required given the analysis_options.yaml file. To execute this step, navigate to example 03 in a terminal window and type:

.. code-block:: bash

    python weis_driver.py --preMPI=True

The code will print to the terminal the recommended setup for an optimal MPI call given your problem.

If you are resource constrained, you can pass the keyword argument ``--maxnP`` to set the maximum number of processors that you have available. If you have 20, type:

.. code-block:: bash

    python weis_driver.py --preMPI=True --maxnP=20

These two commands will help you set up the appropriate computational resources.

At this point, you are ready to launch WEIS in parallel. If you have 20 processors, your call to WEIS will look like:

.. code-block:: bash

    mpiexec -np 20 python weis_driver.py

Parallelize calls to OpenFAST
------------------------------------

WEIS can be used to run OpenFAST simulations, such as design load cases.
More information about setting up DLCs can be found in the `DLC generator documentation <https://github.com/WISDEM/WEIS/blob/docs/docs/dlc_generator.rst>`_.

To do so, WEIS is run with a single function evaluation that fires off a number of OpenFAST simulations.
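Conceptually, that single evaluation partitions the list of OpenFAST cases across the available worker processes. A toy illustration of one such round-robin split (the function and case names here are made up for illustration, not the WEIS API):

```python
def split_cases(cases, n_workers):
    """Distribute simulation cases across workers, one chunk per rank,
    by assigning every n_workers-th case to the same worker."""
    return [cases[i::n_workers] for i in range(n_workers)]

# 7 DLC cases spread over 2 workers, mirroring the `mpiexec -np 2`
# calls in this commit's CI workflow
cases = [f"DLC_1.1_seed{i}" for i in range(7)]
chunks = split_cases(cases, 2)
# worker 0 receives 4 cases, worker 1 receives 3
```

Each worker then runs its chunk of OpenFAST simulations concurrently with the others, which is where the second level of parallelization comes from.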

Let's look at an example in `02_run_openfast_cases <https://github.com/WISDEM/WEIS/tree/develop/examples/02_run_openfast_cases>`_.

In a terminal, navigate to example 02 and type:

.. code-block:: bash

    python weis_driver_loads.py --preMPI=True

The terminal should return this message:

.. code-block:: bash

    Your problem has 0 design variable(s) and 7 OpenFAST run(s)
    You are not running a design optimization, a design of experiment, or your optimizer is not gradient based. The number of parallel function evaluations is set to 1
    To run the code in parallel with MPI, execute one of the following commands
    If you have access to (at least) 8 processors, please call WEIS as:
    mpiexec -np 8 python weis_driver.py
    If you do not have access to 8 processors
    please provide your maximum available number of processors by typing:
    python weis_driver.py --preMPI=True --maxnP=xx
    And substitute xx with your number of processors

If you have access to 8 processors, you are now ready to execute your script by typing:

.. code-block:: bash

    mpiexec -np 8 python weis_driver_loads.py

If you have access to fewer processors, say 4, adjust the ``-np`` entry accordingly:

.. code-block:: bash

    mpiexec -np 4 python weis_driver_loads.py
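The processor count suggested by the ``--preMPI`` step follows from the problem size: for the loads case above, 0 design variables give 1 parallel function evaluation, and with 7 OpenFAST runs the tool recommends 8 processors. One plausible reading of that arithmetic, sketched here purely as an illustration (this is not the actual WEIS sizing logic):

```python
def suggested_procs(n_parallel_evals, n_openfast_runs):
    """Illustrative estimate only: one rank to drive each parallel
    function evaluation, plus one rank per OpenFAST run within it."""
    return n_parallel_evals * (1 + n_openfast_runs)

# 0 design variables -> 1 parallel evaluation; 7 OpenFAST runs -> 8 ranks,
# matching the "at least 8 processors" message printed by --preMPI above
print(suggested_procs(1, 7))  # -> 8
```

For real problems, always rely on the numbers printed by the ``--preMPI`` pre-processing call rather than this back-of-the-envelope estimate.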
6 changes: 3 additions & 3 deletions environment.yml
@@ -9,16 +9,16 @@ dependencies:
- mat4py
- nlopt
- numpydoc
- openfast>=3.5.3
- openfast=3.5.5
- openraft>=1.2.4
- osqp
- pcrunch
- pip
- pyhams>=1.3
#- pyoptsparse
- rosco>=2.9.4
- rosco=2.9.5
- smt
- wisdem>=3.16.4
- wisdem=3.18.1
- pip:
- dash-bootstrap-components
- dash-mantine-components
3 changes: 3 additions & 0 deletions examples/02_run_openfast_cases/modeling_options.yaml
@@ -32,6 +32,9 @@ ROSCO:
flag: True
tuning_yaml: ../01_aeroelasticse/OpenFAST_models/IEA-15-240-RWT/IEA-15-240-RWT-UMaineSemi/IEA15MW-UMaineSemi.yaml
Kp_float: -10
U_pc: [12, 18, 24]
omega_pc: [.1, .1, .1]
zeta_pc: [1.,1.,1.]


DLC_driver:
35 changes: 28 additions & 7 deletions examples/02_run_openfast_cases/weis_driver_loads.py
@@ -3,22 +3,43 @@
import sys

from weis.glue_code.runWEIS import run_weis
from weis.glue_code.weis_args import weis_args, get_max_procs, set_modopt_procs
from openmdao.utils.mpi import MPI

## File management
run_dir = os.path.dirname( os.path.realpath(__file__) )
fname_wt_input = run_dir + os.sep + 'IEA-15-240-RWT.yaml'
fname_modeling_options = run_dir + os.sep + 'modeling_options_loads.yaml'
fname_analysis_options = run_dir + os.sep + 'analysis_options_loads.yaml'
# Parse args
args = weis_args()

## File management
run_dir = os.path.dirname( os.path.realpath(__file__) )
fname_wt_input = os.path.join(run_dir, 'IEA-15-240-RWT.yaml')
fname_modeling_options = os.path.join(run_dir, 'modeling_options_loads.yaml')
fname_analysis_options = os.path.join(run_dir, 'analysis_options_loads.yaml')

tt = time.time()
wt_opt, modeling_options, opt_options = run_weis(fname_wt_input, fname_modeling_options, fname_analysis_options)
maxnP = get_max_procs(args)

modeling_override = None
if MPI:
# Pre-compute number of cores needed in this run
_, modeling_options, _ = run_weis(fname_wt_input,
fname_modeling_options,
fname_analysis_options,
prepMPI=True,
maxnP = maxnP)

modeling_override = set_modopt_procs(modeling_options)

# Run WEIS for real now
wt_opt, modeling_options, opt_options = run_weis(fname_wt_input,
fname_modeling_options,
fname_analysis_options,
modeling_override=modeling_override,
prepMPI=args.preMPI)

if MPI:
rank = MPI.COMM_WORLD.Get_rank()
else:
rank = 0
if rank == 0:
if rank == 0 and args.preMPI == False:
print("Run time: %f"%(time.time()-tt))
sys.stdout.flush()
27 changes: 25 additions & 2 deletions examples/02_run_openfast_cases/weis_driver_rosco_opt.py
@@ -4,6 +4,11 @@

from weis.glue_code.runWEIS import run_weis
from openmdao.utils.mpi import MPI
from weis.glue_code.weis_args import weis_args, get_max_procs, set_modopt_procs

# Parse args
args = weis_args()


## File management
run_dir = os.path.dirname( os.path.realpath(__file__) )
@@ -13,12 +18,30 @@


tt = time.time()
wt_opt, modeling_options, opt_options = run_weis(fname_wt_input, fname_modeling_options, fname_analysis_options)
maxnP = get_max_procs(args)

modeling_override = None
if MPI:
# Pre-compute number of cores needed in this run
_, modeling_options, _ = run_weis(fname_wt_input,
fname_modeling_options,
fname_analysis_options,
prepMPI=True,
maxnP = maxnP)

modeling_override = set_modopt_procs(modeling_options)

# Run WEIS for real now
wt_opt, modeling_options, opt_options = run_weis(fname_wt_input,
fname_modeling_options,
fname_analysis_options,
modeling_override=modeling_override,
prepMPI=args.preMPI)

if MPI:
rank = MPI.COMM_WORLD.Get_rank()
else:
rank = 0
if rank == 0:
if rank == 0 and args.preMPI == False:
print("Run time: %f"%(time.time()-tt))
sys.stdout.flush()
2 changes: 1 addition & 1 deletion examples/03_NREL5MW_OC3_spar/analysis_options.yaml
@@ -42,7 +42,7 @@ driver:
max_major_iter: 10 # Maximum number of major design iterations (SNOPT)
max_minor_iter: 100 # Maximum number of minor design iterations (SNOPT)
max_iter: 2 # Maximum number of iterations (SLSQP)
solver: LN_COBYLA # Optimization solver. Other options are 'SLSQP' - 'CONMIN'
solver: SLSQP # Optimization solver. Other options are 'SLSQP' - 'CONMIN'
step_size: 1.e-3 # Step size for finite differencing
form: central # Finite differencing mode, either forward or central

4 changes: 2 additions & 2 deletions examples/03_NREL5MW_OC3_spar/modeling_options.yaml
@@ -97,11 +97,11 @@ DLC_driver:
DLCs:
- DLC: "1.1"
ws_bin_size: 2
wind_speed: [14.]
wind_speed: [14.,16., 18., 20., 22.]
wave_height: [7.]
wave_period: [1.]
n_seeds: 1
analysis_time: 20.
analysis_time: 10.
transient_time: 0.
turbulent_wind:
HubHt: 90.0
35 changes: 28 additions & 7 deletions examples/03_NREL5MW_OC3_spar/weis_driver.py
@@ -3,22 +3,43 @@
import sys

from weis.glue_code.runWEIS import run_weis
from weis.glue_code.weis_args import weis_args, get_max_procs, set_modopt_procs
from openmdao.utils.mpi import MPI

## File management
run_dir = os.path.dirname( os.path.realpath(__file__) )
fname_wt_input = run_dir + os.sep + "nrel5mw-spar_oc3.yaml"
fname_modeling_options = run_dir + os.sep + 'modeling_options.yaml'
fname_analysis_options = run_dir + os.sep + 'analysis_options_noopt.yaml'
# Parse args
args = weis_args()

## File management
run_dir = os.path.dirname( os.path.realpath(__file__) )
fname_wt_input = os.path.join(run_dir, 'nrel5mw-spar_oc3.yaml')
fname_modeling_options = os.path.join(run_dir, 'modeling_options.yaml')
fname_analysis_options = os.path.join(run_dir, 'analysis_options.yaml')

tt = time.time()
wt_opt, modeling_options, opt_options = run_weis(fname_wt_input, fname_modeling_options, fname_analysis_options)
maxnP = get_max_procs(args)

modeling_override = None
if MPI:
# Pre-compute number of cores needed in this run
_, modeling_options, _ = run_weis(fname_wt_input,
fname_modeling_options,
fname_analysis_options,
prepMPI=True,
maxnP = maxnP)

modeling_override = set_modopt_procs(modeling_options)

# Run WEIS for real now
wt_opt, modeling_options, opt_options = run_weis(fname_wt_input,
fname_modeling_options,
fname_analysis_options,
modeling_override=modeling_override,
prepMPI=args.preMPI)

if MPI:
rank = MPI.COMM_WORLD.Get_rank()
else:
rank = 0
if rank == 0:
if rank == 0 and args.preMPI == False:
print("Run time: %f"%(time.time()-tt))
sys.stdout.flush()
35 changes: 28 additions & 7 deletions examples/03_NREL5MW_OC3_spar/weis_freq_driver.py
@@ -3,22 +3,43 @@
import sys

from weis.glue_code.runWEIS import run_weis
from weis.glue_code.weis_args import weis_args, get_max_procs, set_modopt_procs
from openmdao.utils.mpi import MPI

## File management
run_dir = os.path.dirname( os.path.realpath(__file__) )
fname_wt_input = run_dir + os.sep + "nrel5mw-spar_oc3.yaml"
fname_modeling_options = run_dir + os.sep + "modeling_options_freq.yaml"
fname_analysis_options = run_dir + os.sep + "analysis_options_noopt.yaml"
# Parse args
args = weis_args()

## File management
run_dir = os.path.dirname( os.path.realpath(__file__) )
fname_wt_input = os.path.join(run_dir, 'nrel5mw-spar_oc3.yaml')
fname_modeling_options = os.path.join(run_dir, 'modeling_options_freq.yaml')
fname_analysis_options = os.path.join(run_dir, 'analysis_options_noopt.yaml')

tt = time.time()
wt_opt, modeling_options, opt_options = run_weis(fname_wt_input, fname_modeling_options, fname_analysis_options)
maxnP = get_max_procs(args)

modeling_override = None
if MPI:
# Pre-compute number of cores needed in this run
_, modeling_options, _ = run_weis(fname_wt_input,
fname_modeling_options,
fname_analysis_options,
prepMPI=True,
maxnP = maxnP)

modeling_override = set_modopt_procs(modeling_options)

# Run WEIS for real now
wt_opt, modeling_options, opt_options = run_weis(fname_wt_input,
fname_modeling_options,
fname_analysis_options,
modeling_override=modeling_override,
prepMPI=args.preMPI)

if MPI:
rank = MPI.COMM_WORLD.Get_rank()
else:
rank = 0
if rank == 0:
if rank == 0 and args.preMPI == False:
print("Run time: %f"%(time.time()-tt))
sys.stdout.flush()