Mpi v2 #330
Conversation
ok, this is now ready for review!
@ptrbortolotti, this was added to test the `model_only` flag. Do you still want to get rid of it?
Oh, I missed that... I guess we could reinstate it? I don't have strong opinions...
```python
max_parallel_OF_runs = max([int(np.floor((max_cores - n_DV) / n_DV)), 1])
n_OF_runs_parallel = min([int(n_OF_runs), max_parallel_OF_runs])
if not prepMPI:
    nFD = modeling_options['General']['openfast_configuration']['nFD']
```
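To make the allocation arithmetic concrete, here is a small worked example based on the snippet above; the core, DV, and case counts are illustrative only:

```python
import numpy as np

# Illustrative numbers: 104 cores requested, 4 design variables, 60 OpenFAST cases.
max_cores, n_DV, n_OF_runs = 104, 4, 60

# Reading the snippet above: n_DV cores appear to be reserved for finite
# differencing, and the remainder is split evenly among them for OpenFAST runs.
max_parallel_OF_runs = max([int(np.floor((max_cores - n_DV) / n_DV)), 1])  # (104 - 4) // 4 = 25
n_OF_runs_parallel = min([int(n_OF_runs), max_parallel_OF_runs])           # min(60, 25) = 25
```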
Does the user now need to enter `nFD` and `nOFp` into the modeling options?
I believe this comment is now outdated. The user can provide those two inputs, but it's a lot safer to let WEIS compute them with the `--preMPI` flag.
So the short answer is no: the user does not need to provide those two inputs.
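For context, a minimal sketch of where these two values end up, assuming the key names visible in the diff (`nFD`) and in the question above (`nOFp`); the exact layout in the merged code may differ:

```python
# Hedged sketch: the --preMPI dry run is assumed to record the two counts
# in the modeling options, roughly like this (illustrative values).
modeling_options = {'General': {'openfast_configuration': {}}}

of_cfg = modeling_options['General']['openfast_configuration']
of_cfg['nFD'] = 4    # processors visible to OpenMDAO for finite differencing
of_cfg['nOFp'] = 25  # processors hidden from OpenMDAO, reserved for OpenFAST runs
```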
I tried to factor out some of the argument and modeling-option code we'd otherwise repeat into a new file here: 5a9bc96. Do we want to apply the same arguments to all of our examples eventually?
```python
tt = time.time()
wt_opt, modeling_options, opt_options = run_weis(fname_wt_input, fname_modeling_options, fname_analysis_options)
maxnP = get_max_procs(args)
```
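For readers skimming the excerpt, a self-contained sketch of the surrounding driver; the import path is the one used by current WEIS example drivers, and the input file names are hypothetical (`get_max_procs` is a helper added by this PR whose module is not shown in the excerpt, so it is omitted here):

```python
import time

from weis.glue_code.runWEIS import run_weis  # assumed import path

# Hypothetical input files for illustration.
fname_wt_input = 'IEA-15-240-RWT.yaml'
fname_modeling_options = 'modeling_options.yaml'
fname_analysis_options = 'analysis_options.yaml'

tt = time.time()
wt_opt, modeling_options, opt_options = run_weis(
    fname_wt_input, fname_modeling_options, fname_analysis_options
)
print(f'Run time: {time.time() - tt:.2f} s')
```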
Do we want the double-run when MPI is used to be the default for all examples? Or should users do a dry-run and update the modeling options themselves? We're demonstrating both here, which is also fine.
I found both use cases useful. Expert users might want to know the size of their problem, whereas newer users may just want to run.
Question for all: if we have a DV that's an array, should there be a finite-differencing step for each index in the array? This page seems to suggest not, but I don't think it's representative of the use case I suggest, because the array value is a connected input/output of the whole group.
I thought there was. We have at least anecdotal observations that the computational workload in an optimization scales directly (and nonlinearly) with the number of points in a design vector (blade chord/twist, tower diameter/thickness). We had previously counted the DVs based on total vector length. I'd be surprised if there were one finite-difference step for the entire variable vector in one shot.
This was my understanding, as well. I made this update to account for array DVs: 7aedb7c
Sure, although the documentation page only talks about examples 02 and 03.
Thanks for this update @ptrbortolotti!! I'm good to merge this when you are. I don't think we necessarily need to apply the preMPI arguments to all the examples. I'm okay testing it out on some of our favorite examples and refining the interface from there.
Correct, there's an FD step for each entry in the DV array, so Dan's change correctly accounts for that. If you're using coloring, this number might be reduced: if there are independent DVs, the FD vector can be perturbed in multiple indices simultaneously (see here for more info). WEIS doesn't use coloring, and I don't think it's worth accounting for that here, as most of the DVs are not independent in WEIS design cases.
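A tiny illustration of the counting rule just described (one FD step per entry of the DV array, which is what the fix in 7aedb7c accounts for); the DV names and sizes below are made up:

```python
import numpy as np

# Hypothetical design variables: an 8-point chord distribution and an
# 11-point tower diameter schedule.
design_vars = {
    'blade.chord': np.zeros(8),
    'tower.diameter': np.zeros(11),
}

# Count FD steps by total vector length, not by number of DV names.
n_FD = sum(v.size for v in design_vars.values())
print(n_FD)  # 19, not 2
```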
I like this PR, @ptrbortolotti et al! I went through it and the workflow, examples, and docs all make sense to me. At a minimum, the addition of the …
Purpose
MPI calls to WEIS have always been fragile and have always required expert users for a successful setup.
This PR improves the usability of WEIS by simplifying the setup of MPI calls. The user no longer needs to count the number of design variables (for finite differencing) nor the number of OpenFAST calls. These two numbers still need to be estimated, but this is now done in a pre-call of WEIS that can be executed quickly right before the real call. Notably, the counting of the DVs is done by OpenMDAO and no longer by our own dedicated Python function (simpler maintenance for us). Once available, the two numbers are passed through the modeling options and are used by WEIS to allocate the processors for finite differencing (visible to OpenMDAO) and the processors for OpenFAST calls (hidden from OpenMDAO).
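As a concrete sketch of the two-step pattern described above, written in Python only for illustration (in practice these are two command-line calls); the driver name, flag spelling, and processor count are assumptions drawn from this discussion, not the confirmed interface:

```python
import subprocess

# Step 1: serial dry run that builds the model only, counts the FD steps and
# OpenFAST cases, and records nFD / nOFp in the modeling options.
subprocess.run(['python', 'weis_driver.py', '--preMPI=True'], check=True)

# Step 2: the real MPI run; 20 processors is an arbitrary illustrative count.
subprocess.run(['mpiexec', '-np', '20', 'python', 'weis_driver.py'], check=True)
```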
A new documentation page shows how to make these calls to WEIS. Nothing changes for users running WEIS on a single processor. New tests have been added.
Many thanks to @johnjasa, @dzalkind, @gbarter. This PR is a team effort.
Type of change
What type of change is it?
Testing
GitHub Actions must be executed once NREL/ROSCO#406 is merged.
Checklist