
Value of bound output during training with risk aversion and first stage uncertainty #803

Closed · rjmalves opened this issue Nov 14, 2024 · 6 comments · Fixed by #804

@rjmalves
Hi Oscar,

While evaluating a simple hydrothermal dispatch problem with SDDP.jl to compare its lower bound with other methods, I noticed that the bound column printed in the SDDP.log file contained values consistently below what I found to be the optimal value.

However, calling SDDP.calculate_bound() externally returned a much better value. A code sample that reproduces this behavior:

import JuMP, SDDP
import Random
import HiGHS
using JuMP: @variable, @constraint

num_stages = 2
initial_storage = 83.222
num_branchings = 10
Random.seed!(2)
inflows = max.(40 .+ 20 * randn(num_branchings), 0)
risk_measure = SDDP.EAVaR(; lambda=0.5, beta=0.5)

num_iterations = 250

function sp_builder(sp, t)
  @variable(sp, 0 <= storage <= 100, SDDP.State, initial_value = initial_storage)
  @variable(sp, 0 <= hydro_generation <= 60)
  @variable(sp, 0 <= spill <= 200)
  @variable(sp, 0 <= thermal_generation[i=1:2] <= 15)
  @variable(sp, 0 <= deficit <= 75)
  @variable(sp, inflow)

  @constraint(sp, sum(thermal_generation) + hydro_generation + deficit == 75)
  @constraint(sp, storage.in + inflow - hydro_generation - spill == storage.out)

  SDDP.parameterize(sp, inflows) do ω
    JuMP.fix.(inflow, ω)
  end

  SDDP.@stageobjective(sp, spill + 5 * thermal_generation[1] + 10 * thermal_generation[2] + 50 * deficit)
end

model = SDDP.LinearPolicyGraph(sp_builder;
  stages=num_stages,
  sense=:Min,
  optimizer=HiGHS.Optimizer,
  lower_bound=0.0);

SDDP.train(model, iteration_limit=num_iterations, risk_measure=risk_measure)

println("Calculated lower bound: ", round(SDDP.calculate_bound(model; risk_measure=risk_measure), digits=2))

By fixing the noise terms applied to the first stage to a constant value (e.g., 0), this behavior disappeared. It also did not occur when running the same code with risk_measure = SDDP.Expectation().

While inspecting the source, in the function iteration in algorithm.jl, it seems that the bound calculated during training always uses SDDP.Expectation as the risk measure:

SDDP.jl/src/algorithm.jl

Lines 948 to 950 in f3447e9

@_timeit_threadsafe model.timer_output "calculate_bound" begin
    bound = calculate_bound(model)
end

Is this the expected behavior? When comparing risk-averse approaches, the bound column, and also the content printed by functions like SDDP.write_log_to_csv, leads the user to think that the evolution of the estimated bound is "worse" than it actually is.


odow commented Nov 14, 2024

This is expected behaviour. If there is first-stage uncertainty, we do not (and cannot) optimise the risk-adjusted cost at the root node. The risk measures apply only to the cost to go of the first and subsequent nodes.

Note that if you have uncertainty in the first stage, you are in effect solving N independent risk-averse SDDP problems, and minimizing the expectation of their risk-averse bound.
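A minimal numeric sketch of this point (the bound values below are hypothetical, not from the model above): with N equally likely first-stage realizations, the logged bound is the expectation of the per-realization risk-adjusted bounds, not a risk measure applied across them:

```julia
# Hypothetical risk-adjusted bounds of the N independent subproblems,
# one per (equally likely) first-stage realization:
risk_adjusted_bounds = [90.0, 100.0, 110.0]

# What the bound column reports: the expectation across first-stage
# realizations of each realization's risk-adjusted bound.
reported = sum(risk_adjusted_bounds) / length(risk_adjusted_bounds)  # 100.0

# Applying a risk measure (e.g., worst case) at the root instead would give:
root_risk_adjusted = maximum(risk_adjusted_bounds)                   # 110.0
```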


odow commented Nov 14, 2024

Thinking on this for a few minutes though, perhaps we could add a root_node_risk_measure argument, just for the reporting in the log. We wouldn't do anything differently algorithmically. Butchering notation, but perhaps the argument is that `min{F[cost from root node]} == F[min{E[cost from root node]}]`.
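One way to make the separability argument behind this precise (my notation, not from the thread): because the first-stage decision can be chosen independently for each realization $\xi_1$, min and the aggregation over $\xi_1$ can be swapped without changing the optimal policy,

$$
\min_{\pi}\ \mathbb{E}_{\xi_1}\big[\rho[\text{cost} \mid \xi_1]\big]
  \;=\; \mathbb{E}_{\xi_1}\big[\min_{\pi}\ \rho[\text{cost} \mid \xi_1]\big],
$$

so replacing the outer $\mathbb{E}_{\xi_1}$ by another root-node risk measure changes only the reported value, not the minimizing policy.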

@bfpc
Copy link
Contributor

bfpc commented Nov 14, 2024

> This is expected behaviour. If there is first-stage uncertainty, we do not (and cannot) optimise the risk-adjusted cost at the root node. The risk measures apply only to the cost to go of the first and subsequent nodes.
>
> Note that if you have uncertainty in the first stage, you are in effect solving N independent risk-averse SDDP problems, and minimizing the expectation of their risk-averse bound.

Indeed, the first-stage problems are separable, so minimizing the cost of each realization of $\xi_1$ will lead to the minimum of any risk measure applied to the (full) objective function at each scenario. So indeed it doesn't matter for the policy, but it does matter for the "true problem" being solved: if we minimize $\rho[c_1 y_1 + V_1(x_{1,out})]$, the value is different from $\mathbb{E}[c_1 y_1 + V_1(x_{1,out})]$ (even if the optimal policy is the same). I like the idea of root_node_risk_measure. Is there a place where you discuss first-stage uncertainty in the docs?
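A toy sketch of this separability (all numbers and the cost function are hypothetical): minimizing each realization's cost independently fixes the policy regardless of the risk measure, but the aggregated value depends on whether we apply $\mathbb{E}$ or $\rho$:

```julia
# Toy separable first stage: pick y independently for each realization ξ.
grid = 0.0:0.25:1.0                      # candidate first-stage decisions
cost(ξ, y) = (y - ξ)^2 + 10ξ             # toy stage cost plus cost-to-go
xis = [0.25, 0.5, 0.75]                  # equally likely realizations

# The optimal policy is the same for any risk measure, because each
# realization is minimized on its own:
ys = [argmin(y -> cost(ξ, y), grid) for ξ in xis]

# ... but the reported problem value depends on how realizations are
# aggregated:
vals = [cost(ξ, y) for (ξ, y) in zip(xis, ys)]
expectation = sum(vals) / length(vals)   # E[...]           = 5.0
worst_case  = maximum(vals)              # ρ = worst case   = 7.5
```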

On a different note (though I'm not sure it applies), does this have an impact on periodic models? If I understand correctly, the lower bound reported would be the expectation of the first-stage costs (if the first stage has uncertainty, which is probably often the case in UnicyclicGraphs, right?), and maybe in that case there is a further argument to return the risk-adjusted cost as the lower bound.


odow commented Nov 14, 2024

> Is there a place where you discuss first-stage uncertainty in the docs?

I don't think so. Not in this detail.

> does this have an impact on periodic models

Yes this applies to cyclic graphs.

> So indeed it doesn't matter for the policy, but it does matter for the "true problem" being solved

Correct. I'm hesitant to go changing existing behaviour, so any new feature has to be opt-in.

I have an idea. I'll make a PR.


odow commented Nov 14, 2024

Is this what you had in mind? #804

@rjmalves (Author)

With this I guess I will have the log that I expected! If this makes sense for the codebase, I appreciate it. Thanks, Oscar!

odow closed this as completed in #804 on Nov 14, 2024.