Linearize bilinear terms of lower dual*primal in upper objective #157
Conversation
and start handling the case when set AB is empty (no shared lower primal variables in the bilinear upper and lower objectives). checking with jump_conejo2016, but the linearized results do not match the bilinear problem results yet
and move more methods to src/bilinear_linearization.jl
still not quite working for Conejo; we seem to be getting wrong dual variable indices, and will likely need to Dualize the standard form problem for the linearization to work
tests not automated yet
and fill out some doc strings
and clean up (manual) tests
try to get SCIP working on Actions
trying to close and reopen to kick off Actions tests
@joaquimg I tried to get the tests running on Actions by updating the testing framework to use test/Project.toml, but it is still expecting the old format:
I can't find any info online on how to upgrade a package to the new Pkg testing framework. At first, Actions was failing because it could not add SCIP. Some guidance on how to get the test suite running would be greatly appreciated.
This reverts commit f5ac4bc.
…efs)) hopefully addresses joaquimg#158
for example:

```julia
m = BilevelModel()
T = 100
nodes = [:a, :b]
@variable(Lower(m), y1[nodes, 1:T] >= 0)
@variable(Lower(m), y2[nodes, 1:T] >= 0)
@constraint(Lower(m), b[n in nodes, t in 1:T], y1[n,t] + y2[n,t] == 100)
@variable(Upper(m), lambda, DualOf(b))
```

which returns:

```
2-dimensional DenseAxisArray{BilevelJuMP.BilevelVariableRef,2,...} with index sets:
    Dimension 1, [:a, :b]
    Dimension 2, Base.OneTo(100)
And data, a 2×100 Matrix{BilevelJuMP.BilevelVariableRef}:
 lambda[1]  lambda[3]  lambda[5]  lambda[7]  lambda[9]   …  lambda[193]  lambda[195]  lambda[197]  lambda[199]
 lambda[2]  lambda[4]  lambda[6]  lambda[8]  lambda[10]     lambda[194]  lambda[196]  lambda[198]  lambda[200]
```
@time memoized and threaded find_connected_rows_cols:
21.606794 seconds (2.95 M allocations: 166.152 MiB, 1.61% compilation time)
@time with generated recursive_col_search:
1.217451 seconds (1.41 M allocations: 74.890 MiB, 99.92% compilation time)
(tests done with the first 8 values in upper_var_to_lower_ctr)
this commit is a checkpoint in case we need to go back to the other methods and test times; the next commit will (likely) switch to the generated method of recursive_col_search in find_connected_rows_cols (and perhaps use the globalV in more places)
hours of run time turned into minutes (e.g. BilevelJuMP.get_all_connected_rows_cols went from 8.5 hours to 5.5 minutes)
was not returning an expression and was returning too soon
the repeat function was repeating the same instance of the vector to be filled, resulting in nthreads references to one vector (which caused slowdowns in downstream code that loops over the outputs of this function)
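The aliasing bug can be reproduced in a few lines (a minimal sketch, not the package's actual code; `Int[]` stands in for the real per-thread buffers):

```julia
# `repeat` (like `fill`) copies the *reference*, so every slot aliases one vector.
buffers_bad = repeat([Int[]], 3)
push!(buffers_bad[1], 42)
@show buffers_bad          # every "copy" now contains 42

# Fix: a comprehension constructs an independent vector per slot/thread.
buffers_ok = [Int[] for _ in 1:3]
push!(buffers_ok[1], 42)
@show buffers_ok           # only the first vector contains 42
```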
much faster to use I, J, V from findnz in find_connected_rows_cols than the prior speed-up attempts. also, lower memory impact to use a custom dict for the memo cache, because we do not need to save the array input for each call of find_connected_rows_cols
This reverts commit 70a3355.
sped up some methods in linearization of bilinear terms by:
- memoizing find_connected_rows_cols
- multithreading several for loops
- working with the lower level sparse coefficient matrix in its rows, cols, vals form (from findnz)
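A minimal sketch of the findnz idea (illustrative data, not the package's actual matrices): iterating the `rows, cols, vals` triplets touches only the stored entries instead of scanning the dense grid, which is how the rows/cols/vals form pays off when building connectivity maps:

```julia
using SparseArrays

# Small stand-in for a lower level coefficient matrix.
A = sparse([1, 2, 2], [1, 1, 3], [10.0, 20.0, 30.0], 2, 3)
rows, cols, vals = findnz(A)   # structural nonzeros only

# Build a column -> connected-rows map by walking just the nonzeros.
cols_to_rows = Dict{Int,Vector{Int}}()
for (i, j) in zip(rows, cols)
    push!(get!(cols_to_rows, j, Int[]), i)
end
# cols_to_rows == Dict(1 => [1, 2], 3 => [2])
```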
new bool arg to main_linearization for toggling the condition checks; can save time for large models if the checks have already been done
- only need one pair (j, n) from set A for each block in the lower level problem
- see eq. 23 or 24 in https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9729553
This is getting very nice.
@joaquimg ready!
was added for local tests
@joaquimg ^^
Summary
When the upper level objective contains bilinear products of lower level dual and primal variables, they can be linearized under certain conditions. The changes in this PR check those conditions and linearize the bilinear terms when possible. The mathematical details can be found in:
http://www.optimization-online.org/DB_HTML/2021/08/8561.html
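A hedged usage sketch of the new capability, assuming the `linearize_bilinear_upper_terms` keyword is passed to the `BilevelModel` constructor as described in this PR; the solver and toy model are illustrative, not taken from the PR itself:

```julia
using JuMP, BilevelJuMP, SCIP

# Assumption: `linearize_bilinear_upper_terms` (default `false`) is accepted
# by the `BilevelModel` constructor, per this PR.
m = BilevelModel(SCIP.Optimizer; linearize_bilinear_upper_terms = true)

@variable(Lower(m), y >= 0)
@constraint(Lower(m), con, y <= 100)
@objective(Lower(m), Min, y)

# The dual of a lower constraint appears as an upper variable...
@variable(Upper(m), lambda, DualOf(con))
# ...and its product with the lower primal makes the upper objective bilinear;
# with the flag set, the conditions are checked and the term is linearized.
@objective(Upper(m), Min, lambda * y)
```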
Changes
- new keyword argument to `BilevelModel` called `linearize_bilinear_upper_terms`, which is a boolean that defaults to `false`
- new `jump_conejo2016` test
- `DualOf` now accepts vectors and DenseAxisArrays of constraint references

Additional information
There are some additional, manual tests in a new file for the linearization process that I am keeping in the code for now. They will be useful to adapt and add to the automated tests as this new capability is expanded.
For example, the linearization process currently requires that the lower level model be provided by the user in standard form. However, I believe that we could convert the lower level to standard form for users (I have a start on it in the new functions). I will need help with that, because I'm not sure how to handle the dualization and all the maps if we convert the lower level to standard form before building the single level model. (I think that we will need maps from/to: single level model to/from the dualized standard-form lower model to/from the general-form primal lower model.)