From 25e1e85d3b43b8371def6970a23acf2d2db57801 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Sun, 17 Nov 2024 20:39:35 +0000 Subject: [PATCH] build based on 5d19fd5 --- previews/PR797/.documenter-siteinfo.json | 2 +- previews/PR797/apireference/index.html | 110 +-- previews/PR797/assets/documenter.js | 302 ++++--- previews/PR797/changelog/index.html | 2 +- .../examples/FAST_hydro_thermal/index.html | 10 +- .../FAST_production_management/index.html | 4 +- .../PR797/examples/FAST_quickstart/index.html | 4 +- .../PR797/examples/Hydro_thermal/index.html | 18 +- previews/PR797/examples/SDDP.log | 806 +++++++++--------- previews/PR797/examples/SDDP_0.0.log | 6 +- previews/PR797/examples/SDDP_0.0625.log | 6 +- previews/PR797/examples/SDDP_0.125.log | 6 +- previews/PR797/examples/SDDP_0.25.log | 6 +- previews/PR797/examples/SDDP_0.375.log | 6 +- previews/PR797/examples/SDDP_0.5.log | 6 +- previews/PR797/examples/SDDP_0.625.log | 6 +- previews/PR797/examples/SDDP_0.75.log | 6 +- previews/PR797/examples/SDDP_0.875.log | 6 +- previews/PR797/examples/SDDP_1.0.log | 6 +- .../index.html | 26 +- .../index.html | 20 +- .../index.html | 16 +- .../index.html | 16 +- .../agriculture_mccardle_farm/index.html | 4 +- .../examples/air_conditioning/index.html | 16 +- .../air_conditioning_forward/index.html | 4 +- previews/PR797/examples/all_blacks/index.html | 10 +- .../asset_management_simple/index.html | 22 +- .../asset_management_stagewise/index.html | 26 +- previews/PR797/examples/belief/index.html | 26 +- .../examples/biobjective_hydro/index.html | 64 +- .../examples/booking_management/index.html | 4 +- .../examples/generation_expansion/index.html | 32 +- .../PR797/examples/hydro_valley/index.html | 4 +- .../infinite_horizon_hydro_thermal/index.html | 22 +- .../infinite_horizon_trivial/index.html | 14 +- .../examples/no_strong_duality/index.html | 10 +- .../objective_state_newsvendor/index.html | 303 ++++--- .../examples/sldp_example_one/index.html | 25 +- .../examples/sldp_example_two/index.html | 40 +- .../examples/stochastic_all_blacks/index.html | 12 +- .../examples/the_farmers_problem/index.html | 12 +- .../examples/vehicle_location/index.html | 4 +- previews/PR797/explanation/risk/index.html | 14 +- .../PR797/explanation/theory_intro/index.html | 534 +++++++++--- .../access_previous_variables/index.html | 41 +- .../index.html | 4 +- .../guides/add_a_risk_measure/index.html | 18 +- .../PR797/guides/add_integrality/index.html | 4 +- .../add_multidimensional_noise/index.html | 4 +- .../index.html | 4 +- .../guides/choose_a_stopping_rule/index.html | 4 +- .../guides/create_a_belief_state/index.html | 4 +- .../create_a_general_policy_graph/index.html | 4 +- .../PR797/guides/debug_a_model/index.html | 4 +- .../index.html | 4 +- .../index.html | 4 +- previews/PR797/index.html | 4 +- previews/PR797/objects.inv | Bin 9595 -> 9697 bytes previews/PR797/release_notes/index.html | 2 +- previews/PR797/search_index.js | 2 +- previews/PR797/tutorial/SDDP.log | 423 +++++---- previews/PR797/tutorial/arma/index.html | 59 +- previews/PR797/tutorial/convex.cuts.json | 2 +- .../PR797/tutorial/decision_hazard/index.html | 4 +- .../example_milk_producer/35ea2ce1.svg | 544 ------------ .../example_milk_producer/692fe2c9.svg | 544 ++++++++++++ .../example_milk_producer/77967f8c.svg | 625 -------------- .../example_milk_producer/a499b334.svg | 144 ++++ .../example_milk_producer/aaf230d3.svg | 148 ---- .../example_milk_producer/f19b31b9.svg | 625 ++++++++++++++ .../tutorial/example_milk_producer/index.html | 68 +- 
.../tutorial/example_newsvendor/368ff150.svg | 37 + .../tutorial/example_newsvendor/868190d0.svg | 100 --- .../tutorial/example_newsvendor/9b0ef075.svg | 37 - .../tutorial/example_newsvendor/c011f69a.svg | 97 +++ .../tutorial/example_newsvendor/index.html | 192 ++--- .../tutorial/example_reservoir/0c9b580a.svg | 86 -- .../{f6caca7e.svg => 1808f44e.svg} | 76 +- .../{4cb1679b.svg => 3623a425.svg} | 64 +- .../{03e69e0e.svg => 3bbadfc7.svg} | 268 +++--- .../{3915e49d.svg => 60d3ffea.svg} | 64 +- .../tutorial/example_reservoir/9ced3c61.svg | 86 ++ .../{64d71310.svg => bca9ef48.svg} | 172 ++-- .../{5c2376a4.svg => d69263ec.svg} | 76 +- .../tutorial/example_reservoir/index.html | 99 +-- .../PR797/tutorial/first_steps/index.html | 28 +- previews/PR797/tutorial/inventory.ipynb | 375 ++++++++ previews/PR797/tutorial/inventory.jl | 192 +++++ .../PR797/tutorial/inventory/0a6e9b84.svg | 57 ++ .../PR797/tutorial/inventory/478eb094.svg | 51 ++ previews/PR797/tutorial/inventory/index.html | 203 +++++ .../tutorial/markov_uncertainty/index.html | 12 +- previews/PR797/tutorial/mdps/index.html | 20 +- .../tutorial/objective_states/index.html | 43 +- .../tutorial/objective_uncertainty/index.html | 16 +- previews/PR797/tutorial/pglib_opf/index.html | 42 +- .../plotting/{a25a83af.svg => f424d521.svg} | 130 +-- previews/PR797/tutorial/plotting/index.html | 12 +- previews/PR797/tutorial/spaghetti_plot.html | 2 +- previews/PR797/tutorial/warnings/index.html | 18 +- 101 files changed, 4934 insertions(+), 3612 deletions(-) delete mode 100644 previews/PR797/tutorial/example_milk_producer/35ea2ce1.svg create mode 100644 previews/PR797/tutorial/example_milk_producer/692fe2c9.svg delete mode 100644 previews/PR797/tutorial/example_milk_producer/77967f8c.svg create mode 100644 previews/PR797/tutorial/example_milk_producer/a499b334.svg delete mode 100644 previews/PR797/tutorial/example_milk_producer/aaf230d3.svg create mode 100644 previews/PR797/tutorial/example_milk_producer/f19b31b9.svg create mode 100644 previews/PR797/tutorial/example_newsvendor/368ff150.svg delete mode 100644 previews/PR797/tutorial/example_newsvendor/868190d0.svg delete mode 100644 previews/PR797/tutorial/example_newsvendor/9b0ef075.svg create mode 100644 previews/PR797/tutorial/example_newsvendor/c011f69a.svg delete mode 100644 previews/PR797/tutorial/example_reservoir/0c9b580a.svg rename previews/PR797/tutorial/example_reservoir/{f6caca7e.svg => 1808f44e.svg} (84%) rename previews/PR797/tutorial/example_reservoir/{4cb1679b.svg => 3623a425.svg} (85%) rename previews/PR797/tutorial/example_reservoir/{03e69e0e.svg => 3bbadfc7.svg} (71%) rename previews/PR797/tutorial/example_reservoir/{3915e49d.svg => 60d3ffea.svg} (85%) create mode 100644 previews/PR797/tutorial/example_reservoir/9ced3c61.svg rename previews/PR797/tutorial/example_reservoir/{64d71310.svg => bca9ef48.svg} (84%) rename previews/PR797/tutorial/example_reservoir/{5c2376a4.svg => d69263ec.svg} (85%) create mode 100644 previews/PR797/tutorial/inventory.ipynb create mode 100644 previews/PR797/tutorial/inventory.jl create mode 100644 previews/PR797/tutorial/inventory/0a6e9b84.svg create mode 100644 previews/PR797/tutorial/inventory/478eb094.svg create mode 100644 previews/PR797/tutorial/inventory/index.html rename previews/PR797/tutorial/plotting/{a25a83af.svg => f424d521.svg} (84%) diff --git a/previews/PR797/.documenter-siteinfo.json b/previews/PR797/.documenter-siteinfo.json index fd66ad0ae..11ea77fac 100644 --- a/previews/PR797/.documenter-siteinfo.json +++ 
b/previews/PR797/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-10-25T03:47:03","documenter_version":"1.7.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-17T20:39:21","documenter_version":"1.8.0"}} \ No newline at end of file diff --git a/previews/PR797/apireference/index.html b/previews/PR797/apireference/index.html index 6ad114b60..c1c019abf 100644 --- a/previews/PR797/apireference/index.html +++ b/previews/PR797/apireference/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

API Reference

Policy graphs

SDDP.GraphType
Graph(root_node::T) where T

Create an empty graph structure with the root node root_node.

Example

julia> graph = SDDP.Graph(0)
+

API Reference

Policy graphs

SDDP.GraphType
Graph(root_node::T) where T

Create an empty graph structure with the root node root_node.

Example

julia> graph = SDDP.Graph(0)
 Root
  0
 Nodes
@@ -25,7 +25,7 @@
 Nodes
  {}
 Arcs
- {}
source
SDDP.add_nodeFunction
add_node(graph::Graph{T}, node::T) where {T}

Add a node to the graph graph.

Examples

julia> graph = SDDP.Graph(:root);
+ {}
source
SDDP.add_nodeFunction
add_node(graph::Graph{T}, node::T) where {T}

Add a node to the graph graph.

Examples

julia> graph = SDDP.Graph(:root);
 
 julia> SDDP.add_node(graph, :A)
 
@@ -45,7 +45,7 @@
 Nodes
  2
 Arcs
- {}
source
SDDP.add_edgeFunction
add_edge(graph::Graph{T}, edge::Pair{T, T}, probability::Float64) where {T}

Add an edge to the graph graph.

Examples

julia> graph = SDDP.Graph(0);
+ {}
source
SDDP.add_edgeFunction
add_edge(graph::Graph{T}, edge::Pair{T, T}, probability::Float64) where {T}

Add an edge to the graph graph.

Examples

julia> graph = SDDP.Graph(0);
 
 julia> SDDP.add_node(graph, 1)
 
@@ -69,7 +69,7 @@
 Nodes
  A
 Arcs
- root => A w.p. 1.0
source
SDDP.add_ambiguity_setFunction
add_ambiguity_set(
     graph::Graph{T},
     set::Vector{T},
     lipschitz::Vector{Float64},
@@ -102,7 +102,7 @@
  2 => 3 w.p. 1.0
 Partitions
  {1, 2}
- {3}
source
add_ambiguity_set(graph::Graph{T}, set::Vector{T}, lipschitz::Float64)

Add set to the belief partition of graph.

lipschitz is a Lipschitz constant for each node in set. The Lipschitz constant is the maximum slope of the cost-to-go function with respect to the belief state associated with each node at any point in the state-space.

Examples

julia> graph = SDDP.LinearGraph(3);
+ {3}
source
add_ambiguity_set(graph::Graph{T}, set::Vector{T}, lipschitz::Float64)

Add set to the belief partition of graph.

lipschitz is a Lipschitz constant for each node in set. The Lipschitz constant is the maximum slope of the cost-to-go function with respect to the belief state associated with each node at any point in the state-space.

Examples

julia> graph = SDDP.LinearGraph(3);
 
 julia> SDDP.add_ambiguity_set(graph, [1, 2], 1e3)
 
@@ -121,7 +121,7 @@
  2 => 3 w.p. 1.0
 Partitions
  {1, 2}
- {3}
source
SDDP.LinearGraphFunction
LinearGraph(stages::Int)

Create a linear graph with stages number of nodes.

Examples

julia> graph = SDDP.LinearGraph(3)
+ {3}
source
SDDP.LinearGraphFunction
LinearGraph(stages::Int)

Create a linear graph with stages number of nodes.

Examples

julia> graph = SDDP.LinearGraph(3)
 Root
  0
 Nodes
@@ -131,7 +131,7 @@
 Arcs
  0 => 1 w.p. 1.0
  1 => 2 w.p. 1.0
- 2 => 3 w.p. 1.0
source
SDDP.MarkovianGraphFunction
MarkovianGraph(transition_matrices::Vector{Matrix{Float64}})

Construct a Markovian graph from the vector of transition matrices.

transition_matrices[t][i, j] gives the probability of transitioning from Markov state i in stage t - 1 to Markov state j in stage t.

The dimension of the first transition matrix should be (1, N), and transition_matrices[1][1, i] is the probability of transitioning from the root node to Markov state i.

Examples

julia> graph = SDDP.MarkovianGraph([ones(1, 1), [0.5 0.5], [0.8 0.2; 0.2 0.8]])
+ 2 => 3 w.p. 1.0
source
SDDP.MarkovianGraphFunction
MarkovianGraph(transition_matrices::Vector{Matrix{Float64}})

Construct a Markovian graph from the vector of transition matrices.

transition_matrices[t][i, j] gives the probability of transitioning from Markov state i in stage t - 1 to Markov state j in stage t.

The dimension of the first transition matrix should be (1, N), and transition_matrices[1][1, i] is the probability of transitioning from the root node to Markov state i.

Examples

julia> graph = SDDP.MarkovianGraph([ones(1, 1), [0.5 0.5], [0.8 0.2; 0.2 0.8]])
 Root
  (0, 1)
 Nodes
@@ -147,7 +147,7 @@
  (2, 1) => (3, 1) w.p. 0.8
  (2, 1) => (3, 2) w.p. 0.2
  (2, 2) => (3, 1) w.p. 0.2
- (2, 2) => (3, 2) w.p. 0.8
source
MarkovianGraph(;
+ (2, 2) => (3, 2) w.p. 0.8
source
MarkovianGraph(;
     stages::Int,
     transition_matrix::Matrix{Float64},
     root_node_transition::Vector{Float64},
@@ -175,11 +175,11 @@
  (2, 1) => (3, 1) w.p. 0.8
  (2, 1) => (3, 2) w.p. 0.2
  (2, 2) => (3, 1) w.p. 0.2
- (2, 2) => (3, 2) w.p. 0.8
source
MarkovianGraph(
+ (2, 2) => (3, 2) w.p. 0.8
source
MarkovianGraph(
     simulator::Function;
     budget::Union{Int,Vector{Int}},
     scenarios::Int = 1000,
-)

Construct a Markovian graph by fitting a Markov chain to scenarios generated by simulator().

budget is the total number of nodes in the resulting Markov chain. This can either be specified as a single Int, in which case we will attempt to intelligently distribute the nodes between stages. Alternatively, budget can be a Vector{Int}, which details the number of Markov states to have in each stage.

source
SDDP.UnicyclicGraphFunction
UnicyclicGraph(discount_factor::Float64; num_nodes::Int = 1)

Construct a graph composed of num_nodes nodes that form a single cycle, with a probability of discount_factor of continuing the cycle.

Examples

julia> graph = SDDP.UnicyclicGraph(0.9; num_nodes = 2)
+)

Construct a Markovian graph by fitting a Markov chain to scenarios generated by simulator().

budget is the total number of nodes in the resulting Markov chain. This can either be specified as a single Int, in which case we will attempt to intelligently distribute the nodes between stages. Alternatively, budget can be a Vector{Int}, which details the number of Markov states to have in each stage.
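
Example

A minimal sketch of the intended workflow. The simulator below is an illustrative random-walk generator, not part of the API; any zero-argument function that returns a Vector{Float64} will do:

function simulator()
    x, out = 1.0, Float64[]
    for _ in 1:12
        x = 0.9 * x + 0.1 * rand()  # illustrative dynamics
        push!(out, x)
    end
    return out
end

graph = SDDP.MarkovianGraph(simulator; budget = 20, scenarios = 100)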

source
SDDP.UnicyclicGraphFunction
UnicyclicGraph(discount_factor::Float64; num_nodes::Int = 1)

Construct a graph composed of num_nodes nodes that form a single cycle, with a probability of discount_factor of continuing the cycle.

Examples

julia> graph = SDDP.UnicyclicGraph(0.9; num_nodes = 2)
 Root
  0
 Nodes
@@ -188,7 +188,7 @@
 Arcs
  0 => 1 w.p. 1.0
  1 => 2 w.p. 1.0
- 2 => 1 w.p. 0.9
source
SDDP.LinearPolicyGraphFunction
LinearPolicyGraph(builder::Function; stages::Int, kwargs...)

Create a linear policy graph with stages number of stages.

Keyword arguments

  • stages: the number of stages in the graph

  • kwargs: other keyword arguments are passed to SDDP.PolicyGraph.

Examples

julia> SDDP.LinearPolicyGraph(; stages = 2, lower_bound = 0.0) do sp, t
+ 2 => 1 w.p. 0.9
source
SDDP.LinearPolicyGraphFunction
LinearPolicyGraph(builder::Function; stages::Int, kwargs...)

Create a linear policy graph with stages number of stages.

Keyword arguments

  • stages: the number of stages in the graph

  • kwargs: other keyword arguments are passed to SDDP.PolicyGraph.

Examples

julia> SDDP.LinearPolicyGraph(; stages = 2, lower_bound = 0.0) do sp, t
     # ... build model ...
 end
 A policy graph with 2 nodes.
@@ -198,7 +198,7 @@
     # ... build model ...
 end
 A policy graph with 2 nodes.
-Node indices: 1, 2
source
SDDP.MarkovianPolicyGraphFunction
MarkovianPolicyGraph(
     builder::Function;
     transition_matrices::Vector{Array{Float64,2}},
     kwargs...
@@ -215,7 +215,7 @@
     # ... build model ...
 end
 A policy graph with 5 nodes.
- Node indices: (1, 1), (2, 1), (2, 2), (3, 1), (3, 2)
source
SDDP.PolicyGraphType
PolicyGraph(
     builder::Function,
     graph::Graph{T};
     sense::Symbol = :Min,
@@ -237,28 +237,28 @@
     optimizer = HiGHS.Optimizer,
 ) do subproblem, index
     # ... subproblem definitions ...
-end
source

Subproblem definition

SDDP.@stageobjectiveMacro
@stageobjective(subproblem, expr)

Set the stage-objective of subproblem to expr.

Examples

@stageobjective(subproblem, 2x + y)
source

Subproblem definition

SDDP.@stageobjectiveMacro
@stageobjective(subproblem, expr)

Set the stage-objective of subproblem to expr.

Examples

@stageobjective(subproblem, 2x + y)
source
SDDP.parameterizeFunction
parameterize(
     modify::Function,
     subproblem::JuMP.Model,
     realizations::Vector{T},
     probability::Vector{Float64} = fill(1.0 / length(realizations))
 ) where {T}

Add a parameterization function modify to subproblem. The modify function takes one argument and modifies subproblem based on the realization of the noise sampled from realizations with corresponding probabilities probability.

In order to conduct an out-of-sample simulation, modify should accept arguments that are not in realizations (but still of type T).

Examples

SDDP.parameterize(subproblem, [1, 2, 3], [0.4, 0.3, 0.3]) do ω
     JuMP.set_upper_bound(x, ω)
-end
source
parameterize(node::Node, noise)

Parameterize node node with the noise noise.

source
SDDP.add_objective_stateFunction
add_objective_state(update::Function, subproblem::JuMP.Model; kwargs...)

Add an objective state variable to subproblem.

Required kwargs are:

  • initial_value: The initial value of the objective state variable at the root node.
  • lipschitz: The lipschitz constant of the objective state variable.

Setting a tight value for the lipschitz constant can significantly improve the speed of convergence.

Optional kwargs are:

  • lower_bound: A valid lower bound for the objective state variable. Can be -Inf.
  • upper_bound: A valid upper bound for the objective state variable. Can be +Inf.

Setting tight values for these optional variables can significantly improve the speed of convergence.

If the objective state is N-dimensional, each keyword argument must be an NTuple{N,Float64}. For example, initial_value = (0.0, 1.0).

source
SDDP.NoiseType
Noise(support, probability)

An atom of a discrete random variable at the point of support support and associated probability probability.

source

Training the policy

SDDP.add_objective_stateFunction
add_objective_state(update::Function, subproblem::JuMP.Model; kwargs...)

Add an objective state variable to subproblem.

Required kwargs are:

  • initial_value: The initial value of the objective state variable at the root node.
  • lipschitz: The lipschitz constant of the objective state variable.

Setting a tight value for the lipschitz constant can significantly improve the speed of convergence.

Optional kwargs are:

  • lower_bound: A valid lower bound for the objective state variable. Can be -Inf.
  • upper_bound: A valid upper bound for the objective state variable. Can be +Inf.

Setting tight values for these optional variables can significantly improve the speed of convergence.

If the objective state is N-dimensional, each keyword argument must be an NTuple{N,Float64}. For example, initial_value = (0.0, 1.0).
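
Example

A sketch of a one-dimensional price state. It assumes subproblem is a node of a policy graph and x is a state variable defined in that node; the numeric values are illustrative only:

SDDP.add_objective_state(
    subproblem;
    initial_value = 1.5,
    lipschitz = 10.0,
    lower_bound = 0.5,
    upper_bound = 2.5,
) do price, ω
    return price + ω  # update rule applied when the node is parameterized
end

SDDP.parameterize(subproblem, [-0.1, 0.0, 0.1]) do ω
    price = SDDP.objective_state(subproblem)
    @stageobjective(subproblem, price * x.out)
    return
end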

source
SDDP.NoiseType
Noise(support, probability)

An atom of a discrete random variable at the point of support support and associated probability probability.
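
Example

For example, an outcome of 2.5 that occurs with probability 0.3:

ω = SDDP.Noise(2.5, 0.3)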

source

Training the policy

SDDP.numerical_stability_reportFunction
numerical_stability_report(
     [io::IO = stdout,]
     model::PolicyGraph;
     by_node::Bool = false,
     print::Bool = true,
     warn::Bool = true,
-)

Print a report identifying possible numeric stability issues.

Keyword arguments

  • If by_node, print a report for each node in the graph.

  • If print, print to io.

  • If warn, warn if the coefficients may cause numerical issues.

source
SDDP.trainFunction
SDDP.train(model::PolicyGraph; kwargs...)

Train the policy for model.

Keyword arguments

  • iteration_limit::Int: number of iterations to conduct before termination.

  • time_limit::Float64: number of seconds to train before termination.

  • stopping_rules: a vector of SDDP.AbstractStoppingRules. Defaults to SimulationStoppingRule.

  • print_level::Int: control the level of printing to the screen. Defaults to 1. Set to 0 to disable all printing.

  • log_file::String: filepath at which to write a log of the training progress. Defaults to SDDP.log.

  • log_frequency::Int: control the frequency with which the logging is outputted (iterations/log). It must be at least 1. Defaults to 1.

  • log_every_seconds::Float64: control the frequency with which the logging is outputted (seconds/log). Defaults to 0.0.

  • log_every_iteration::Bool: overrides log_frequency and log_every_seconds to force every iteration to be printed. Defaults to false.

  • run_numerical_stability_report::Bool: generate (and print) a numerical stability report prior to solve. Defaults to true.

  • refine_at_similar_nodes::Bool: if SDDP can detect that two nodes have the same children, it can cheaply add a cut discovered at one to the other. In almost all cases this should be set to true.

  • cut_deletion_minimum::Int: the minimum number of cuts to cache before deleting cuts from the subproblem. The impact on performance is solver specific; however, smaller values result in smaller subproblems (and therefore quicker solves), at the expense of more time spent performing cut selection.

  • risk_measure: the risk measure to use at each node. Defaults to Expectation.

  • sampling_scheme: a sampling scheme to use on the forward pass of the algorithm. Defaults to InSampleMonteCarlo.

  • backward_sampling_scheme: a backward pass sampling scheme to use on the backward pass of the algorithm. Defaults to CompleteSampler.

  • cut_type: choose between SDDP.SINGLE_CUT and SDDP.MULTI_CUT versions of SDDP.

  • dashboard::Bool: open a visualization of the training over time. Defaults to false.

  • parallel_scheme::AbstractParallelScheme: specify a scheme for solving in parallel. Defaults to Threaded().

  • forward_pass::AbstractForwardPass: specify a scheme to use for the forward passes.

  • forward_pass_resampling_probability::Union{Nothing,Float64}: set to a value in (0, 1) to enable RiskAdjustedForwardPass. Defaults to nothing (disabled).

  • add_to_existing_cuts::Bool: set to true to allow training a model that was previously trained. Defaults to false.

  • duality_handler::AbstractDualityHandler: specify a duality handler to use when creating cuts.

  • post_iteration_callback::Function: a callback with the signature post_iteration_callback(::IterationResult) that is evaluated after each iteration of the algorithm.

There is also a special option for infinite horizon problems

  • cycle_discretization_delta: the maximum distance between states allowed on the forward pass. This is for advanced users only and needs to be used in conjunction with a different sampling_scheme.
source
SDDP.write_cuts_to_fileFunction
write_cuts_to_file(
+)

Print a report identifying possible numeric stability issues.

Keyword arguments

  • If by_node, print a report for each node in the graph.

  • If print, print to io.

  • If warn, warn if the coefficients may cause numerical issues.
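
Example

Typical calls, assuming model is an SDDP.PolicyGraph:

SDDP.numerical_stability_report(model)
SDDP.numerical_stability_report(model; by_node = true, warn = false)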

source
SDDP.trainFunction
SDDP.train(model::PolicyGraph; kwargs...)

Train the policy for model.

Keyword arguments

  • iteration_limit::Int: number of iterations to conduct before termination.

  • time_limit::Float64: number of seconds to train before termination.

  • stopping_rules: a vector of SDDP.AbstractStoppingRules. Defaults to SimulationStoppingRule.

  • print_level::Int: control the level of printing to the screen. Defaults to 1. Set to 0 to disable all printing.

  • log_file::String: filepath at which to write a log of the training progress. Defaults to SDDP.log.

  • log_frequency::Int: control the frequency with which the logging is outputted (iterations/log). It must be at least 1. Defaults to 1.

  • log_every_seconds::Float64: control the frequency with which the logging is outputted (seconds/log). Defaults to 0.0.

  • log_every_iteration::Bool: overrides log_frequency and log_every_seconds to force every iteration to be printed. Defaults to false.

  • run_numerical_stability_report::Bool: generate (and print) a numerical stability report prior to solve. Defaults to true.

  • refine_at_similar_nodes::Bool: if SDDP can detect that two nodes have the same children, it can cheaply add a cut discovered at one to the other. In almost all cases this should be set to true.

  • cut_deletion_minimum::Int: the minimum number of cuts to cache before deleting cuts from the subproblem. The impact on performance is solver specific; however, smaller values result in smaller subproblems (and therefore quicker solves), at the expense of more time spent performing cut selection.

  • risk_measure: the risk measure to use at each node. Defaults to Expectation.

  • root_node_risk_measure::AbstractRiskMeasure: the risk measure to use at the root node when computing the Bound column. Note that the choice of this option does not change the primal policy, and it applies only if the transition from the root node to the first stage is stochastic. Defaults to Expectation.

  • sampling_scheme: a sampling scheme to use on the forward pass of the algorithm. Defaults to InSampleMonteCarlo.

  • backward_sampling_scheme: a backward pass sampling scheme to use on the backward pass of the algorithm. Defaults to CompleteSampler.

  • cut_type: choose between SDDP.SINGLE_CUT and SDDP.MULTI_CUT versions of SDDP.

  • dashboard::Bool: open a visualization of the training over time. Defaults to false.

  • parallel_scheme::AbstractParallelScheme: specify a scheme for solving in parallel. Defaults to Threaded().

  • forward_pass::AbstractForwardPass: specify a scheme to use for the forward passes.

  • forward_pass_resampling_probability::Union{Nothing,Float64}: set to a value in (0, 1) to enable RiskAdjustedForwardPass. Defaults to nothing (disabled).

  • add_to_existing_cuts::Bool: set to true to allow training a model that was previously trained. Defaults to false.

  • duality_handler::AbstractDualityHandler: specify a duality handler to use when creating cuts.

  • post_iteration_callback::Function: a callback with the signature post_iteration_callback(::IterationResult) that is evaluated after each iteration of the algorithm.

There is also a special option for infinite horizon problems

  • cycle_discretization_delta: the maximum distance between states allowed on the forward pass. This is for advanced users only and needs to be used in conjunction with a different sampling_scheme.
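
Example

An illustrative call; the keyword values are arbitrary choices, not defaults:

SDDP.train(
    model;
    iteration_limit = 100,
    log_every_iteration = true,
    stopping_rules = [SDDP.TimeLimit(3600.0)],
)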
source
SDDP.write_cuts_to_fileFunction
write_cuts_to_file(
     model::PolicyGraph{T},
     filename::String;
     kwargs...,
-) where {T}

Write the cuts that form the policy in model to filename in JSON format.

Keyword arguments

  • node_name_parser is a function which converts the name of each node into a string representation. It has the signature: node_name_parser(::T)::String.

  • write_only_selected_cuts write only the selected cuts to the json file. Defaults to false.

See also SDDP.read_cuts_from_file.

source
SDDP.read_cuts_from_fileFunction
read_cuts_from_file(
+) where {T}

Write the cuts that form the policy in model to filename in JSON format.

Keyword arguments

  • node_name_parser is a function which converts the name of each node into a string representation. It has the signature: node_name_parser(::T)::String.

  • write_only_selected_cuts write only the selected cuts to the json file. Defaults to false.

See also SDDP.read_cuts_from_file.

source
SDDP.read_cuts_from_fileFunction
read_cuts_from_file(
     model::PolicyGraph{T},
     filename::String;
     kwargs...,
-) where {T}

Read cuts (saved using SDDP.write_cuts_to_file) from filename into model.

Since T can be an arbitrary Julia type, the conversion to JSON is lossy. When reading, read_cuts_from_file only supports T=Int, T=NTuple{N, Int}, and T=Symbol. If you have manually created a policy graph with a different node type T, provide a function node_name_parser with the signature

Keyword arguments

  • node_name_parser(T, name::String)::T where {T} that returns the name of each node given the string name name. If node_name_parser returns nothing, those cuts are skipped.

  • cut_selection::Bool: whether to run the cut selection algorithm when adding the cuts to the model.

See also SDDP.write_cuts_to_file.

source
SDDP.write_log_to_csvFunction
write_log_to_csv(model::PolicyGraph, filename::String)

Write the log of the most recent training to a csv for post-analysis.

Assumes that the model has been trained via SDDP.train.

source
SDDP.set_numerical_difficulty_callbackFunction
set_numerical_difficulty_callback(
+) where {T}

Read cuts (saved using SDDP.write_cuts_to_file) from filename into model.

Since T can be an arbitrary Julia type, the conversion to JSON is lossy. When reading, read_cuts_from_file only supports T=Int, T=NTuple{N, Int}, and T=Symbol. If you have manually created a policy graph with a different node type T, provide a function node_name_parser with the signature

Keyword arguments

  • node_name_parser(T, name::String)::T where {T} that returns the name of each node given the string name name. If node_name_parser returns nothing, those cuts are skipped.

  • cut_selection::Bool: whether to run the cut selection algorithm when adding the cuts to the model.

See also SDDP.write_cuts_to_file.
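
Example

A minimal round trip, assuming model has been trained and new_model is a second policy graph built with the same node type T:

SDDP.write_cuts_to_file(model, "cuts.json")
SDDP.read_cuts_from_file(new_model, "cuts.json")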

source
SDDP.write_log_to_csvFunction
write_log_to_csv(model::PolicyGraph, filename::String)

Write the log of the most recent training to a csv for post-analysis.

Assumes that the model has been trained via SDDP.train.
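
Example

For example, after a call to SDDP.train(model):

SDDP.write_log_to_csv(model, "log.csv")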

source
SDDP.set_numerical_difficulty_callbackFunction
set_numerical_difficulty_callback(
     model::PolicyGraph,
     callback::Function,
 )

Set a callback function callback(::PolicyGraph, ::Node; require_dual::Bool) that is run when the optimizer terminates without finding a primal solution (and dual solution if require_dual is true).

Default callback

The default callback is a small variation of:

function callback(::PolicyGraph, node::Node; require_dual::Bool)
@@ -274,29 +274,29 @@
     end
     return
 end
-SDDP.set_numerical_difficulty_callback(model, callback)
source

Stopping rules

Stopping rules

SDDP.convergence_testFunction
convergence_test(
     model::PolicyGraph,
     log::Vector{Log},
     ::AbstractStoppingRule,
-)::Bool

Return a Bool indicating if the algorithm should terminate the training.

source
SDDP.TimeLimitType
TimeLimit(limit::Float64)

Terminate the algorithm after limit seconds of computation.

source
SDDP.StatisticalType
Statistical(;
+)::Bool

Return a Bool indicating if the algorithm should terminate the training.

source
SDDP.TimeLimitType
TimeLimit(limit::Float64)

Terminate the algorithm after limit seconds of computation.

source
SDDP.StatisticalType
Statistical(;
     num_replications::Int,
     iteration_period::Int = 1,
     z_score::Float64 = 1.96,
     verbose::Bool = true,
     disable_warning::Bool = false,
-)

Perform an in-sample Monte Carlo simulation of the policy with num_replications replications every iteration_period iterations and terminate if the deterministic bound (lower if minimizing) falls into the confidence interval for the mean of the simulated cost.

If verbose = true, print the confidence interval.

If disable_warning = true, disable the warning telling you not to use this stopping rule (see below).

Why this stopping rule is not good

This stopping rule is one of the most common stopping rules seen in the literature. Don't follow the crowd. It is a poor choice for your model, and should be rarely used. Instead, you should use the default stopping rule, or use a fixed limit like a time or iteration limit.

To understand why this stopping rule is a bad idea, assume we have conducted num_replications simulations and the objectives are in a vector objectives::Vector{Float64}.

Our mean is μ = mean(objectives) and the half-width of the confidence interval is w = z_score * std(objectives) / sqrt(num_replications).

Many papers suggest terminating the algorithm once the deterministic bound (lower if minimizing, upper if maximizing) is contained within the confidence interval. That is, if μ - w <= bound <= μ + w. Even worse, some papers define an optimization gap of (μ + w) / bound (if minimizing) or (μ - w) / bound (if maximizing), and they terminate once the gap is less than a value like 1%.

Both of these approaches are misleading, and more often than not, they will result in terminating with a sub-optimal policy that performs worse than expected. There are two main reasons for this:

  1. The half-width depends on the number of replications. To reduce the computational cost, users are often tempted to choose a small number of replications. This increases the half-width and makes it more likely that the algorithm will stop early. But if we choose a large number of replications, then the computational cost is high, and we would have been better off to run a fixed number of iterations and use that computational time to run extra training iterations.
  2. The confidence interval assumes that the simulated values are normally distributed. In infinite horizon models, this is almost never the case. The distribution is usually closer to exponential or log-normal.

There is a third, more technical reason which relates to the conditional dependence of constructing multiple confidence intervals.

The default value of z_score = 1.96 corresponds to a 95% confidence interval. You should interpret the interval as "if we re-run this simulation 100 times, then the true mean will lie in the confidence interval 95 times out of 100." But if the bound is within the confidence interval, then we know the true mean cannot be better than the bound. Therefore, there is a more than 95% chance that the mean is within the interval.

A separate problem arises if we simulate, find that the bound is outside the confidence interval, keep training, and then re-simulate to compute a new confidence interval. Because we will terminate when the bound enters the confidence interval, the repeated construction of a confidence interval means that the unconditional probability that we terminate with a false positive is larger than 5% (there are now more chances that the sample mean is optimistic and that the confidence interval includes the bound but not the true mean). One fix is to simulate with a sequentially increasing number of replicates, so that the unconditional probability stays at 95%, but this runs into the problem of computational cost. For more information on sequential sampling, see, for example, Güzin Bayraksan, David P. Morton, (2011) A Sequential Sampling Procedure for Stochastic Programming. Operations Research 59(4):898-913.

source
SDDP.BoundStallingType
BoundStalling(num_previous_iterations::Int, tolerance::Float64)

Terminate the algorithm once the deterministic bound (lower if minimizing, upper if maximizing) fails to improve by more than tolerance in absolute terms for more than num_previous_iterations consecutive iterations, provided it has improved relative to the bound after the first iteration.

Checking for an improvement relative to the first iteration avoids early termination in a situation where the bound fails to improve for the first N iterations. This frequently happens in models with a large number of stages, where it takes time for the cuts to propagate backward enough to modify the bound of the root node.

source
SDDP.StoppingChainType
StoppingChain(rules::AbstractStoppingRule...)

Terminate once all of the rules are satisfied.

This stopping rule short-circuits, so subsequent rules are tested only if the previous rules pass.

Examples

A stopping rule that runs 100 iterations, then checks for the bound stalling:

StoppingChain(IterationLimit(100), BoundStalling(5, 0.1))
source
SDDP.SimulationStoppingRuleType
SimulationStoppingRule(;
+)

Perform an in-sample Monte Carlo simulation of the policy with num_replications replications every iteration_period iterations and terminate if the deterministic bound (lower if minimizing) falls into the confidence interval for the mean of the simulated cost.

If verbose = true, print the confidence interval.

If disable_warning = true, disable the warning telling you not to use this stopping rule (see below).

Why this stopping rule is not good

This stopping rule is one of the most common stopping rules seen in the literature. Don't follow the crowd. It is a poor choice for your model, and should be rarely used. Instead, you should use the default stopping rule, or use a fixed limit like a time or iteration limit.

To understand why this stopping rule is a bad idea, assume we have conducted num_replications simulations and the objectives are in a vector objectives::Vector{Float64}.

Our mean is μ = mean(objectives) and the half-width of the confidence interval is w = z_score * std(objectives) / sqrt(num_replications).

Many papers suggest terminating the algorithm once the deterministic bound (lower if minimizing, upper if maximizing) is contained within the confidence interval. That is, if μ - w <= bound <= μ + w. Even worse, some papers define an optimization gap of (μ + w) / bound (if minimizing) or (μ - w) / bound (if maximizing), and they terminate once the gap is less than a value like 1%.

Both of these approaches are misleading, and more often than not, they will result in terminating with a sub-optimal policy that performs worse than expected. There are two main reasons for this:

  1. The half-width depends on the number of replications. To reduce the computational cost, users are often tempted to choose a small number of replications. This increases the half-width and makes it more likely that the algorithm will stop early. But if we choose a large number of replications, then the computational cost is high, and we would have been better off to run a fixed number of iterations and use that computational time to run extra training iterations.
  2. The confidence interval assumes that the simulated values are normally distributed. In infinite horizon models, this is almost never the case. The distribution is usually closer to exponential or log-normal.

There is a third, more technical reason which relates to the conditional dependence of constructing multiple confidence intervals.

The default value of z_score = 1.96 corresponds to a 95% confidence interval. You should interpret the interval as "if we re-run this simulation 100 times, then the true mean will lie in the confidence interval 95 times out of 100." But if the bound is within the confidence interval, then we know the true mean cannot be better than the bound. Therefore, there is a more than 95% chance that the mean is within the interval.

A separate problem arises if we simulate, find that the bound is outside the confidence interval, keep training, and then re-simulate to compute a new confidence interval. Because we will terminate when the bound enters the confidence interval, the repeated construction of a confidence interval means that the unconditional probability that we terminate with a false positive is larger than 5% (there are now more chances that the sample mean is optimistic and that the confidence interval includes the bound but not the true mean). One fix is to simulate with a sequentially increasing number of replicates, so that the unconditional probability stays at 95%, but this runs into the problem of computational cost. For more information on sequential sampling, see, for example, Güzin Bayraksan, David P. Morton, (2011) A Sequential Sampling Procedure for Stochastic Programming. Operations Research 59(4):898-913.
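
Example

If you use this rule despite the caveats above, a typical (illustrative) call is:

SDDP.train(
    model;
    stopping_rules = [SDDP.Statistical(; num_replications = 100)],
)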

source
SDDP.BoundStallingType
BoundStalling(num_previous_iterations::Int, tolerance::Float64)

Terminate the algorithm once the deterministic bound (lower if minimizing, upper if maximizing) fails to improve by more than tolerance in absolute terms for more than num_previous_iterations consecutive iterations, provided it has improved relative to the bound after the first iteration.

Checking for an improvement relative to the first iteration avoids early termination in a situation where the bound fails to improve for the first N iterations. This frequently happens in models with a large number of stages, where it takes time for the cuts to propagate backward enough to modify the bound of the root node.
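
Example

Terminate once the bound has changed by less than 1e-4 for 10 consecutive iterations (the values are illustrative):

SDDP.train(
    model;
    stopping_rules = [SDDP.BoundStalling(10, 1e-4)],
)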

source
SDDP.StoppingChainType
StoppingChain(rules::AbstractStoppingRule...)

Terminate once all of the rules are satisfied.

This stopping rule short-circuits, so subsequent rules are tested only if the previous rules pass.

Examples

A stopping rule that runs 100 iterations, then checks for the bound stalling:

StoppingChain(IterationLimit(100), BoundStalling(5, 0.1))
source
SDDP.SimulationStoppingRuleType
SimulationStoppingRule(;
     sampling_scheme::AbstractSamplingScheme = SDDP.InSampleMonteCarlo(),
     replications::Int = -1,
     period::Int = -1,
     distance_tol::Float64 = 1e-2,
     bound_tol::Float64 = 1e-4,
-)

Terminate the algorithm using a mix of heuristics. Unless you know otherwise, this is typically a good default.

Termination criteria

First, we check that the deterministic bound has stabilized. That is, over the last five iterations, the deterministic bound has changed by less than an absolute or relative tolerance of bound_tol.

Then, if we have not done one in the last period iterations, we perform a primal simulation of the policy using replications out-of-sample realizations from sampling_scheme. The realizations are stored and re-used in each simulation. From each simulation, we record the value of the stage objective. We terminate the policy if each of the trajectories in two consecutive simulations differ by less than distance_tol.

By default, replications and period are -1, and SDDP.jl will guess good values for these. Over-ride the default behavior by setting an appropriate value.

Example

SDDP.train(model; stopping_rules = [SimulationStoppingRule()])
source
SDDP.FirstStageStoppingRuleType
FirstStageStoppingRule(; atol::Float64 = 1e-3, iterations::Int = 50)

Terminate the algorithm when the outgoing values of the first-stage state variables have not changed by more than atol for iterations number of consecutive iterations.

Example

SDDP.train(model; stopping_rules = [FirstStageStoppingRule()])
source

Sampling schemes

SDDP.sample_scenarioFunction
sample_scenario(graph::PolicyGraph{T}, ::AbstractSamplingScheme) where {T}

Sample a scenario from the policy graph graph based on the sampling scheme.

Returns ::Tuple{Vector{Tuple{T, <:Any}}, Bool}, where the first element is the scenario, and the second element is a Boolean flag indicating if the scenario was terminated due to the detection of a cycle.

The scenario is a list of tuples (type Vector{Tuple{T, <:Any}}) where the first component of each tuple is the index of the node, and the second component is the stagewise-independent noise term observed in that node.

source
SDDP.InSampleMonteCarloType
InSampleMonteCarlo(;
+)

Terminate the algorithm using a mix of heuristics. Unless you know otherwise, this is typically a good default.

Termination criteria

First, we check that the deterministic bound has stabilized. That is, over the last five iterations, the deterministic bound has changed by less than an absolute or relative tolerance of bound_tol.

Then, if we have not done one in the last period iterations, we perform a primal simulation of the policy using replications out-of-sample realizations from sampling_scheme. The realizations are stored and re-used in each simulation. From each simulation, we record the value of the stage objective. We terminate the policy if each of the trajectories in two consecutive simulations differ by less than distance_tol.

By default, replications and period are -1, and SDDP.jl will guess good values for these. Over-ride the default behavior by setting an appropriate value.

Example

SDDP.train(model; stopping_rules = [SimulationStoppingRule()])
source
SDDP.FirstStageStoppingRuleType
FirstStageStoppingRule(; atol::Float64 = 1e-3, iterations::Int = 50)

Terminate the algorithm when the outgoing values of the first-stage state variables have not changed by more than atol for iterations number of consecutive iterations.

Example

SDDP.train(model; stopping_rules = [FirstStageStoppingRule()])
source

Sampling schemes

SDDP.sample_scenarioFunction
sample_scenario(graph::PolicyGraph{T}, ::AbstractSamplingScheme) where {T}

Sample a scenario from the policy graph graph based on the sampling scheme.

Returns ::Tuple{Vector{Tuple{T, <:Any}}, Bool}, where the first element is the scenario, and the second element is a Boolean flag indicating if the scenario was terminated due to the detection of a cycle.

The scenario is a list of tuples (type Vector{Tuple{T, <:Any}}) where the first component of each tuple is the index of the node, and the second component is the stagewise-independent noise term observed in that node.

source
SDDP.InSampleMonteCarloType
InSampleMonteCarlo(;
     max_depth::Int = 0,
     terminate_on_cycle::Function = false,
     terminate_on_dummy_leaf::Function = true,
     rollout_limit::Function = (i::Int) -> typemax(Int),
     initial_node::Any = nothing,
-)

A Monte Carlo sampling scheme using the in-sample data from the policy graph definition.

If terminate_on_cycle, terminate the forward pass once a cycle is detected. If max_depth > 0, return once max_depth nodes have been sampled. If terminate_on_dummy_leaf, terminate the forward pass with 1 - probability of sampling a child node.

Note that if terminate_on_cycle = false and terminate_on_dummy_leaf = false then max_depth must be set > 0.

Control which node the trajectories start from using initial_node. If it is left as nothing, the root node is used as the starting node.

You can use rollout_limit to set iteration-specific depth limits. For example:

InSampleMonteCarlo(rollout_limit = i -> 2 * i)
source
SDDP.OutOfSampleMonteCarloType
OutOfSampleMonteCarlo(
+)

A Monte Carlo sampling scheme using the in-sample data from the policy graph definition.

If terminate_on_cycle, terminate the forward pass once a cycle is detected. If max_depth > 0, return once max_depth nodes have been sampled. If terminate_on_dummy_leaf, terminate the forward pass with 1 - probability of sampling a child node.

Note that if terminate_on_cycle = false and terminate_on_dummy_leaf = false then max_depth must be set > 0.

Control which node the trajectories start from using initial_node. If it is left as nothing, the root node is used as the starting node.

You can use rollout_limit to set iteration-specific depth limits. For example:

InSampleMonteCarlo(rollout_limit = i -> 2 * i)
source
SDDP.OutOfSampleMonteCarloType
OutOfSampleMonteCarlo(
     f::Function,
     graph::PolicyGraph;
     use_insample_transition::Bool = false,
@@ -315,7 +315,7 @@
     end
 end

Given linear policy graph graph with T stages:

sampler = OutOfSampleMonteCarlo(graph, use_insample_transition=true) do node
     return [SDDP.Noise(node, 0.3), SDDP.Noise(node + 1, 0.7)]
-end
source
SDDP.HistoricalType
Historical(
     scenarios::Vector{Vector{Tuple{T,S}}},
     probability::Vector{Float64};
     terminate_on_cycle::Bool = false,
@@ -326,17 +326,17 @@
         [(1, 1.0), (2, 0.0), (3, 0.0)]
     ],
     [0.2, 0.5, 0.3],
-)
source
Historical(
+)
source
Historical(
     scenarios::Vector{Vector{Tuple{T,S}}};
     terminate_on_cycle::Bool = false,
 ) where {T,S}

A deterministic sampling scheme that iterates through the vector of provided scenarios.

Examples

Historical([
     [(1, 0.5), (2, 1.0), (3, 0.5)],
     [(1, 0.5), (2, 0.0), (3, 1.0)],
     [(1, 1.0), (2, 0.0), (3, 0.0)],
-])
source
Historical(
+])
source
Historical(
     scenario::Vector{Tuple{T,S}};
     terminate_on_cycle::Bool = false,
-) where {T,S}

A deterministic sampling scheme that always samples scenario.

Examples

Historical([(1, 0.5), (2, 1.5), (3, 0.75)])
source
SDDP.PSRSamplingSchemeType
PSRSamplingScheme(N::Int; sampling_scheme = InSampleMonteCarlo())

A sampling scheme with N scenarios, similar to how PSR does it.

source
SDDP.SimulatorSamplingSchemeType
SimulatorSamplingScheme(simulator::Function)

Create a sampling scheme based on a univariate scenario generator simulator, which returns a Vector{Float64} when called with no arguments like simulator().

This sampling scheme must be used with a Markovian graph constructed from the same simulator.

The sample space for SDDP.parameterize must be a tuple with 1 or 2 values: the first value is the Markov state and the second value is the random variable for the current node. If the node is deterministic, use Ω = [(markov_state,)].

This sampling scheme generates a new scenario by calling simulator(), and then picking the sequence of nodes in the Markovian graph that is closest to the new trajectory.

Example

julia> using SDDP
+) where {T,S}

A deterministic sampling scheme that always samples scenario.

Examples

Historical([(1, 0.5), (2, 1.5), (3, 0.75)])
source
SDDP.PSRSamplingSchemeType
PSRSamplingScheme(N::Int; sampling_scheme = InSampleMonteCarlo())

A sampling scheme with N scenarios, similar to how PSR does it.

source
SDDP.SimulatorSamplingSchemeType
SimulatorSamplingScheme(simulator::Function)

Create a sampling scheme based on a univariate scenario generator simulator, which returns a Vector{Float64} when called with no arguments like simulator().

This sampling scheme must be used with a Markovian graph constructed from the same simulator.

The sample space for SDDP.parameterize must be a tuple with 1 or 2 values: the first value is the Markov state and the second value is the random variable for the current node. If the node is deterministic, use Ω = [(markov_state,)].

This sampling scheme generates a new scenario by calling simulator(), and then picking the sequence of nodes in the Markovian graph that is closest to the new trajectory.

Example

julia> using SDDP
 
 julia> import HiGHS
 
@@ -368,50 +368,50 @@
            iteration_limit = 10,
            sampling_scheme = SDDP.SimulatorSamplingScheme(simulator),
        )
-
source

Parallel schemes

SDDP.ThreadedType
Threaded()

Run SDDP in multi-threaded mode.

Use julia --threads N to start Julia with N threads. In most cases, you should pick N to be the number of physical cores on your machine.

Danger

This plug-in is experimental, and parts of SDDP.jl may not be threadsafe. If you encounter any problems or crashes, please open a GitHub issue.

Example

SDDP.train(model; parallel_scheme = SDDP.Threaded())
-SDDP.simulate(model; parallel_scheme = SDDP.Threaded())
source

Parallel schemes

SDDP.ThreadedType
Threaded()

Run SDDP in multi-threaded mode.

Use julia --threads N to start Julia with N threads. In most cases, you should pick N to be the number of physical cores on your machine.

Danger

This plug-in is experimental, and parts of SDDP.jl may not be threadsafe. If you encounter any problems or crashes, please open a GitHub issue.

Example

SDDP.train(model; parallel_scheme = SDDP.Threaded())
+SDDP.simulate(model; parallel_scheme = SDDP.Threaded())
source
SDDP.AsynchronousType
Asynchronous(
     [init_callback::Function,]
     slave_pids::Vector{Int} = workers();
     use_master::Bool = true,
-)

Run SDDP in asynchronous mode on the workers with pids slave_pids.

After initializing the models on each worker, call init_callback(model). Note that init_callback is run locally on the worker and not on the master thread.

If use_master is true, iterations are also conducted on the master process.

source
Asynchronous(
+)

Run SDDP in asynchronous mode on the workers with pids slave_pids.

After initializing the models on each worker, call init_callback(model). Note that init_callback is run locally on the worker and not on the master thread.

If use_master is true, iterations are also conducted on the master process.

source
Asynchronous(
     solver::Any,
     slave_pids::Vector{Int} = workers();
     use_master::Bool = true,
-)

Run SDDP in asynchronous mode on the workers with pids slave_pids.

Set the optimizer on each worker by calling JuMP.set_optimizer(model, solver).

source

Forward passes

SDDP.DefaultForwardPassType
DefaultForwardPass(; include_last_node::Bool = true)

The default forward pass.

If include_last_node = false and the sample terminated due to a cycle, then the last node (which forms the cycle) is omitted. This can be a useful option to set when training, but it comes at the cost of not knowing which node formed the cycle (if there are multiple possibilities).

source
SDDP.RevisitingForwardPassType
RevisitingForwardPass(
+)

Run SDDP in asynchronous mode on the workers with pids slave_pids.

Set the optimizer on each worker by calling JuMP.set_optimizer(model, solver).
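
Example

A sketch, assuming worker processes have been added with Distributed and that SDDP and HiGHS are available on every worker:

using Distributed
Distributed.addprocs(4)
@everywhere using SDDP, HiGHS

SDDP.train(model; parallel_scheme = SDDP.Asynchronous(HiGHS.Optimizer))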

source

Forward passes

SDDP.DefaultForwardPassType
DefaultForwardPass(; include_last_node::Bool = true)

The default forward pass.

If include_last_node = false and the sample terminated due to a cycle, then the last node (which forms the cycle) is omitted. This can be a useful option to set when training, but it comes at the cost of not knowing which node formed the cycle (if there are multiple possibilities).

source
SDDP.RevisitingForwardPassType
RevisitingForwardPass(
     period::Int = 500;
     sub_pass::AbstractForwardPass = DefaultForwardPass(),
-)

A forward pass scheme that generates period new forward passes (using sub_pass), then revisits all previously explored forward passes. This can be useful to encourage convergence at a diversity of points in the state-space.

Set period = typemax(Int) to disable.

For example, if period = 2, then the forward passes will be revisited as follows: 1, 2, 1, 2, 3, 4, 1, 2, 3, 4, 5, 6, 1, 2, ....

source
SDDP.RiskAdjustedForwardPassType
RiskAdjustedForwardPass(;
+)

A forward pass scheme that generate period new forward passes (using sub_pass), then revisits all previously explored forward passes. This can be useful to encourage convergence at a diversity of points in the state-space.

Set period = typemax(Int) to disable.

For example, if period = 2, then the forward passes will be revisited as follows: 1, 2, 1, 2, 3, 4, 1, 2, 3, 4, 5, 6, 1, 2, ....

source
SDDP.RiskAdjustedForwardPassType
RiskAdjustedForwardPass(;
     forward_pass::AbstractForwardPass,
     risk_measure::AbstractRiskMeasure,
     resampling_probability::Float64,
     rejection_count::Int = 5,
-)

A forward pass that resamples a previous forward pass with resampling_probability probability, and otherwise samples a new forward pass using forward_pass.

The forward pass to revisit is chosen based on the risk-adjusted (using risk_measure) probability of the cumulative stage objectives.

Note that this objective corresponds to the first time we visited the trajectory. Subsequent visits may have improved things, but we don't have the mechanisms in-place to update it. Therefore, remove the forward pass from resampling consideration after rejection_count revisits.

source
SDDP.AlternativeForwardPassType
AlternativeForwardPass(
+)

A forward pass that resamples a previous forward pass with resampling_probability probability, and otherwise samples a new forward pass using forward_pass.

The forward pass to revisit is chosen based on the risk-adjusted (using risk_measure) probability of the cumulative stage objectives.

Note that this objective corresponds to the first time we visited the trajectory. Subsequent visits may have improved things, but we don't have the mechanisms in-place to update it. Therefore, remove the forward pass from resampling consideration after rejection_count revisits.

source
SDDP.AlternativeForwardPassType
AlternativeForwardPass(
     forward_model::SDDP.PolicyGraph{T};
     forward_pass::AbstractForwardPass = DefaultForwardPass(),
-)

A forward pass that simulates using forward_model, which may be different to the model used in the backward pass.

When using this forward pass, you should almost always pass SDDP.AlternativePostIterationCallback to the post_iteration_callback argument of SDDP.train.

This forward pass is most useful when the forward_model is non-convex and we use a convex approximation of the model in the backward pass.

For example, in optimal power flow models, we can use an AC-OPF formulation as the forward_model and a DC-OPF formulation as the backward model.

For more details see the paper:

Rosemberg, A., and Street, A., and Garcia, J.D., and Valladão, D.M., and Silva, T., and Dowson, O. (2021). Assessing the cost of network simplifications in long-term hydrothermal dispatch planning models. IEEE Transactions on Sustainable Energy. 13(1), 196-206.

source
SDDP.RegularizedForwardPassType
RegularizedForwardPass(;
+)

A forward pass that simulates using forward_model, which may be different to the model used in the backward pass.

When using this forward pass, you should almost always pass SDDP.AlternativePostIterationCallback to the post_iteration_callback argument of SDDP.train.

This forward pass is most useful when the forward_model is non-convex and we use a convex approximation of the model in the backward pass.

For example, in optimal power flow models, we can use an AC-OPF formulation as the forward_model and a DC-OPF formulation as the backward model.

For more details see the paper:

Rosemberg, A., and Street, A., and Garcia, J.D., and Valladão, D.M., and Silva, T., and Dowson, O. (2021). Assessing the cost of network simplifications in long-term hydrothermal dispatch planning models. IEEE Transactions on Sustainable Energy. 13(1), 196-206.

source
SDDP.RegularizedForwardPassType
RegularizedForwardPass(;
     rho::Float64 = 0.05,
     forward_pass::AbstractForwardPass = DefaultForwardPass(),
)

A forward pass that regularizes the outgoing first-stage state variables with an L-infty trust-region constraint about the previous iteration's solution. Specifically, the bounds of the outgoing state variable x are updated from (l, u) to max(l, x^k - rho * (u - l)) <= x <= min(u, x^k + rho * (u - l)), where x^k is the optimal solution of x in the previous iteration. On the first iteration, the value of the state at the root node is used.

By default, rho is set to 5%, which seems to work well empirically.

Pass a different forward_pass to control the forward pass within the regularized forward pass.

This forward pass is largely intended to be used for investment problems in which the first stage makes a series of capacity decisions that then influence the rest of the graph. An error is thrown if the first stage problem is not deterministic, and states are silently skipped if they do not have finite bounds.

source
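For example, reusing the illustrative model from the earlier sketch (its first stage is deterministic and its state variable volume has finite bounds, as this forward pass requires):

# Restrict the first-stage solution to a box of ±5% of the bound range around
# the previous iteration's first-stage solution.
SDDP.train(
    model;
    forward_pass = SDDP.RegularizedForwardPass(; rho = 0.05),
    iteration_limit = 10,
)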

Risk Measures

SDDP.adjust_probabilityFunction
adjust_probability(
     measure::Expectation
     risk_adjusted_probability::Vector{Float64},
     original_probability::Vector{Float64},
     noise_support::Vector{Noise{T}},
     objective_realizations::Vector{Float64},
     is_minimization::Bool,
) where {T}
source
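The same interface is shared by every risk measure. A small sketch with a worst-case measure follows; the numbers are illustrative only. risk_adjusted is filled in-place, and for WorstCase in a minimization problem the adjusted weight should concentrate on the costliest realization.

risk_adjusted = zeros(4)
original = fill(0.25, 4)
support = SDDP.Noise.([1, 2, 3, 4], original)
objectives = [1.0, 2.0, 3.0, 4.0]
SDDP.adjust_probability(
    SDDP.WorstCase(),
    risk_adjusted,
    original,
    support,
    objectives,
    true,  # is_minimization
)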

Duality handlers

SDDP.ContinuousConicDualityType
ContinuousConicDuality()

Compute dual variables in the backward pass using conic duality, relaxing any binary or integer restrictions as necessary.

Theory

Given the problem

min Cᵢ(x̄, u, w) + θᵢ
  st (x̄, x′, u) in Xᵢ(w) ∩ S
     x̄ - x == 0          [λ]

where S ⊆ ℝ×ℤ, we relax integrality and use conic duality to solve for λ in the problem:

min Cᵢ(x̄, u, w) + θᵢ
  st (x̄, x′, u) in Xᵢ(w)
     x̄ - x == 0          [λ]
source
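Duality handlers are passed to SDDP.train via the duality_handler keyword. A minimal sketch, reusing the earlier illustrative model:

SDDP.train(
    model;
    duality_handler = SDDP.ContinuousConicDuality(),
    iteration_limit = 10,
)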
SDDP.LagrangianDualityType
LagrangianDuality(;
     method::LocalImprovementSearch.AbstractSearchMethod =
         LocalImprovementSearch.BFGS(100),
 )

Obtain dual variables in the backward pass using Lagrangian duality.

Arguments

  • method: the LocalImprovementSearch method for maximizing the Lagrangian dual problem.

Theory

Given the problem

min Cᵢ(x̄, u, w) + θᵢ
  st (x̄, x′, u) in Xᵢ(w) ∩ S
     x̄ - x == 0          [λ]

where S ⊆ ℝ×ℤ, we solve the problem max L(λ), where:

L(λ) = min Cᵢ(x̄, u, w) + θᵢ - λ' h(x̄)
        st (x̄, x′, u) in Xᵢ(w) ∩ S

and where h(x̄) = x̄ - x.

source
SDDP.StrengthenedConicDualityType
StrengthenedConicDuality()

Obtain dual variables in the backward pass using strengthened conic duality.

Theory

Given the problem

min Cᵢ(x̄, u, w) + θᵢ
  st (x̄, x′, u) in Xᵢ(w) ∩ S
     x̄ - x == 0          [λ]

we first obtain an estimate for λ using ContinuousConicDuality.

Then, we evaluate the Lagrangian function:

L(λ) = min Cᵢ(x̄, u, w) + θᵢ - λ' (x̄ - x)
        st (x̄, x′, u) in Xᵢ(w) ∩ S

to obtain a better estimate of the intercept.

source
SDDP.BanditDualityType
BanditDuality()

Formulates the problem of choosing a duality handler as a multi-armed bandit problem. The arms to choose between are ContinuousConicDuality, StrengthenedConicDuality, and LagrangianDuality.

Our problem isn't a typical multi-armed bandit for two reasons:

  1. The reward distribution is non-stationary (each arm converges to 0 as it keeps getting pulled).
  2. The distribution of rewards is dependent on the history of the arms that were chosen.

We choose a very simple heuristic: pick the arm with the best mean + 1 standard deviation. That should ensure we consistently pick the arm with the best likelihood of improving the value function.

In future, we should consider discounting the rewards of earlier iterations, and focus more on the more-recent rewards.

source
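Any of the duality handlers documented above can be substituted through the same keyword. For example, a sketch using the bandit heuristic with the earlier illustrative model:

SDDP.train(
    model;
    duality_handler = SDDP.BanditDuality(),
    iteration_limit = 10,
)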

Simulating the policy

SDDP.simulateFunction
simulate(
     model::PolicyGraph,
     number_replications::Int = 1,
     variables::Vector{Symbol} = Symbol[];
     custom_recorders = Dict{Symbol, Function}(
         :constraint_dual => sp -> JuMP.dual(sp[:my_constraint])
     )
)

The value of the dual in the first stage of the second replication can be accessed as:

simulation_results[2][1][:constraint_dual]
source
SDDP.calculate_boundFunction
SDDP.calculate_bound(
     model::PolicyGraph,
     state::Dict{Symbol,Float64} = model.initial_root_state;
     risk_measure::AbstractRiskMeasure = Expectation(),
)

Calculate the lower bound (if minimizing, otherwise upper bound) of the problem model at the point state, assuming the risk measure at the root node is risk_measure.

source
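A usage sketch, reusing the earlier illustrative model (whose single state variable is volume):

SDDP.train(model; iteration_limit = 10)
lower_bound = SDDP.calculate_bound(model)

# The bound can also be evaluated from a different incoming state:
SDDP.calculate_bound(model, Dict(:volume => 50.0))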
SDDP.add_all_cutsFunction
add_all_cuts(model::PolicyGraph)

Add all cuts that may have been deleted back into the model.

Explanation

During the solve, SDDP.jl may decide to remove cuts for a variety of reasons.

These can include cuts that define the optimal value function, particularly around the extremes of the state-space (e.g., reservoirs empty).

This function ensures that all cuts discovered are added back into the model.

You should call this after train and before simulate.

source
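A typical call sequence, reusing the earlier illustrative model:

SDDP.train(model; iteration_limit = 100)
SDDP.add_all_cuts(model)  # restore any cuts deleted during training
simulations = SDDP.simulate(model, 10)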

Decision rules

SDDP.DecisionRuleType
DecisionRule(model::PolicyGraph{T}; node::T)

Create a decision rule for node node in model.

Example

rule = SDDP.DecisionRule(model; node = 1)
source
SDDP.evaluateFunction
evaluate(
     rule::DecisionRule;
     incoming_state::Dict{Symbol,Float64},
     noise = nothing,
     controls_to_record = Symbol[],
)

Evaluate the decision rule rule at the point described by the incoming_state and noise.

If the node is deterministic, omit the noise argument.

Pass a list of symbols to controls_to_record to save the optimal primal solution corresponding to the names registered in the model.

source
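A sketch using the earlier illustrative model, whose state variable is volume, whose second-stage noise terms are the inflow realizations, and whose controls include hydro and thermal:

rule = SDDP.DecisionRule(model; node = 2)
solution = SDDP.evaluate(
    rule;
    incoming_state = Dict(:volume => 150.0),
    noise = 50.0,  # omit for a deterministic node
    controls_to_record = [:hydro, :thermal],
)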
evaluate(
     V::ValueFunction,
     point::Dict{Union{Symbol,String},<:Real}
     objective_state = nothing,
     belief_state = nothing
)

Evaluate the value function V at point in the state-space.

Returns a tuple containing the height of the function, and the subgradient w.r.t. the convex state-variables.

Examples

evaluate(V, Dict(:volume => 1.0))

If the state variable is constructed like @variable(sp, volume[1:4] >= 0, SDDP.State, initial_value = 0.0), use [i] to index the state variable:

evaluate(V, Dict(Symbol("volume[1]") => 1.0))

You can also use strings or symbols for the keys.

evaluate(V, Dict("volume[1]" => 1))
source
evaluate(V::ValueFunction{Nothing, Nothing}; kwargs...)

Evaluate the value function V at the point in the state-space specified by kwargs.

Examples

evaluate(V; volume = 1)
source
evaluate(
     model::PolicyGraph{T},
     validation_scenarios::ValidationScenarios{T,S},
 ) where {T,S}

Evaluate the performance of the policy contained in model after a call to train on the scenarios specified by validation_scenarios.

Examples

model, validation_scenarios = read_from_file("my_model.sof.json")
 train(model; iteration_limit = 100)
simulations = evaluate(model, validation_scenarios)
source

Visualizing the policy

SDDP.SpaghettiPlotType
SDDP.SpaghettiPlot(; stages, scenarios)

Initialize a new SpaghettiPlot with stages stages and scenarios number of replications.

source
SDDP.add_spaghettiFunction
SDDP.add_spaghetti(data_function::Function, plt::SpaghettiPlot; kwargs...)

Description

Add a new figure to the SpaghettiPlot plt, where the y-value of the scenario-th line when x = stage is given by data_function(plt.simulations[scenario][stage]).

Keyword arguments

  • xlabel: set the xaxis label
  • ylabel: set the yaxis label
  • title: set the title of the plot
  • ymin: set the minimum y value
  • ymax: set the maximum y value
  • cumulative: plot the additive accumulation of the value across the stages
  • interpolate: interpolation method for lines between stages.

Defaults to "linear"; see the d3 docs for all options.

Examples

simulations = simulate(model, 10)
 plt = SDDP.spaghetti_plot(simulations)
 SDDP.add_spaghetti(plt; title = "Stage objective") do data
     return data[:stage_objective]
end
source
SDDP.publication_plotFunction
SDDP.publication_plot(
     data_function, simulations;
     quantile = [0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0],
     kwargs...)

Create a Plots.jl recipe plot of the simulations.

See Plots.jl for the list of keyword arguments.

Examples

SDDP.publication_plot(simulations; title = "My title") do data
     return data[:stage_objective]
end
source
SDDP.ValueFunctionType
ValueFunction

A representation of the value function. SDDP.jl uses the following unique representation of the value function that is undocumented in the literature.

It supports three types of state variables:

  1. x - convex "resource" states
  2. b - concave "belief" states
  3. y - concave "objective" states

In addition, we have three types of cuts:

  1. Single-cuts (also called "average" cuts in the literature), which involve the risk-adjusted expectation of the cost-to-go.
  2. Multi-cuts, which use a different cost-to-go term for each realization w.
  3. Risk-cuts, which correspond to the facets of the dual interpretation of a coherent risk measure.

Therefore, ValueFunction returns a JuMP model of the following form:

V(x, b, y) = min: μᵀb + νᵀy + θ
              s.t. # "Single" / "Average" cuts
                   μᵀb(j) + νᵀy(j) + θ >= α(j) + xᵀβ(j), ∀ j ∈ J
                   # "Multi" cuts
                   μᵀb(k) + νᵀy(k) + φ(w) >= α(k, w) + xᵀβ(k, w), ∀w ∈ Ω, k ∈ K
                   # "Risk-set" cuts
                  θ ≥ Σ{p(k, w) * φ(w)}_w - μᵀb(k) - νᵀy(k), ∀ k ∈ K
source
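A sketch of constructing and querying this object after training. The constructor form ValueFunction(model; node = 1) is an assumption based on recent SDDP.jl releases, and volume is the state variable of the earlier illustrative model.

SDDP.train(model; iteration_limit = 10)
V = SDDP.ValueFunction(model; node = 1)
height, subgradient = SDDP.evaluate(V, Dict(:volume => 100.0))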
SDDP.evaluateMethod
evaluate(
     V::ValueFunction,
     point::Dict{Union{Symbol,String},<:Real}
     objective_state = nothing,
     belief_state = nothing
)

Evaluate the value function V at point in the state-space.

Returns a tuple containing the height of the function, and the subgradient w.r.t. the convex state-variables.

Examples

evaluate(V, Dict(:volume => 1.0))

If the state variable is constructed like @variable(sp, volume[1:4] >= 0, SDDP.State, initial_value = 0.0), use [i] to index the state variable:

evaluate(V, Dict(Symbol("volume[1]") => 1.0))

You can also use strings or symbols for the keys.

evaluate(V, Dict("volume[1]" => 1))
source
SDDP.plotFunction
plot(plt::SpaghettiPlot[, filename::String]; open::Bool = true)

Plot the SpaghettiPlot plt to filename. If filename is not given, it will be saved to a temporary directory. If open = true, then a browser window will be opened to display the resulting HTML file.

source

Debugging the model

SDDP.write_subproblem_to_fileFunction
write_subproblem_to_file(
     node::Node,
     filename::String;
     throw_error::Bool = false,
)

Write the subproblem contained in node to the file filename.

The throw_error is an argument used internally by SDDP.jl. If set, an error will be thrown.

Example

SDDP.write_subproblem_to_file(model[1], "subproblem_1.lp")
source
SDDP.deterministic_equivalentFunction
deterministic_equivalent(
     pg::PolicyGraph{T},
     optimizer = nothing;
     time_limit::Union{Real,Nothing} = 60.0,
)

Form a JuMP model that represents the deterministic equivalent of the problem.

Examples

deterministic_equivalent(model)
deterministic_equivalent(model, HiGHS.Optimizer)
source
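The returned object is an ordinary JuMP model, so it can be solved directly. For example, reusing the earlier illustrative model:

det = SDDP.deterministic_equivalent(model, HiGHS.Optimizer)
set_silent(det)
optimize!(det)
objective_value(det)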

StochOptFormat

SDDP.write_to_fileFunction
write_to_file(
     model::PolicyGraph,
     filename::String;
     compression::MOI.FileFormats.AbstractCompressionScheme =
         MOI.FileFormats.AutomaticCompression(),
     kwargs...
)

Write model to filename in the StochOptFormat file format.

Pass an argument to compression to override the default of automatically detecting the file compression to use based on the extension of filename.

See Base.write(::IO, ::PolicyGraph) for information on the keyword arguments that can be provided.

Warning

This function is experimental. See the full warning in Base.write(::IO, ::PolicyGraph).

Examples

write_to_file(model, "my_model.sof.json"; validation_scenarios = 10)
source
SDDP.read_from_fileFunction
read_from_file(
     filename::String;
     compression::MOI.FileFormats.AbstractCompressionScheme =
         MOI.FileFormats.AutomaticCompression(),
     kwargs...
)::Tuple{PolicyGraph, ValidationScenarios}

Return a tuple containing a PolicyGraph object and a ValidationScenarios read from filename in the StochOptFormat file format.

Pass an argument to compression to override the default of automatically detecting the file compression to use based on the extension of filename.

See Base.read(::IO, ::Type{PolicyGraph}) for information on the keyword arguments that can be provided.

Warning

This function is experimental. See the full warning in Base.read(::IO, ::Type{PolicyGraph}).

Examples

model, validation_scenarios = read_from_file("my_model.sof.json")
source
Base.writeMethod
Base.write(
     io::IO,
     model::PolicyGraph;
     validation_scenarios::Union{Nothing,Int,ValidationScenarios} = nothing,
         date = "2020-07-20",
         description = "Example problem for the SDDP.jl documentation",
     )
end
source
Base.readMethod
Base.read(
     io::IO,
     ::Type{PolicyGraph};
     bound::Float64 = 1e6,
 )::Tuple{PolicyGraph,ValidationScenarios}

Return a tuple containing a PolicyGraph object and a ValidationScenarios read from io in the StochOptFormat file format.

See also: evaluate.

Compatibility

Warning

This function is experimental. Things may change between commits. You should not rely on this functionality as a long-term file format (yet).

In addition to potential changes to the underlying format, only a subset of possible modifications are supported. These include:

  • Additive random variables in the constraints or in the objective
  • Multiplicative random variables in the objective

If your model uses something other than this, this function may throw an error or silently build a non-convex model.

Examples

open("my_model.sof.json", "r") do io
     model, validation_scenarios = read(io, PolicyGraph)
end
source
SDDP.evaluateMethod
evaluate(
     model::PolicyGraph{T},
     validation_scenarios::ValidationScenarios{T,S},
 ) where {T,S}

Evaluate the performance of the policy contained in model after a call to train on the scenarios specified by validation_scenarios.

Examples

model, validation_scenarios = read_from_file("my_model.sof.json")
 train(model; iteration_limit = 100)
simulations = evaluate(model, validation_scenarios)
source
SDDP.ValidationScenariosType
ValidationScenarios{T,S}(scenarios::Vector{ValidationScenario{T,S}})

An AbstractSamplingScheme based on a vector of scenarios.

Each scenario is a vector of Tuple{T, S} where the first element is the node to visit and the second element is the realization of the stagewise-independent noise term. Pass nothing if the node is deterministic.

source

Release notes

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

v1.9.0 (October 17, 2024)

Added

Fixed

  • Fixed the tests to skip threading tests if running in serial (#770)
  • Fixed BanditDuality to handle the case where the standard deviation is NaN (#779)
  • Fixed an error when lagged state variables are encountered in MSPFormat (#786)
  • Fixed publication_plot with replications of different lengths (#788)
  • Fixed CTRL+C interrupting the code at unsafe points (#789)

Other

  • Documentation improvements (#771) (#772)
  • Updated printing because of changes in JuMP (#773)

v1.8.1 (August 5, 2024)

Fixed

  • Fixed various issues with SDDP.Threaded() (#761)
  • Fixed a deprecation warning for sorting a dictionary (#763)

Other

  • Updated copyright notices (#762)
  • Updated .JuliaFormatter.toml (#764)

v1.8.0 (July 24, 2024)

Added

  • Added SDDP.Threaded(), which is an experimental parallel scheme that supports solving problems using multiple threads. Some parts of SDDP.jl may not be thread-safe, and this can cause incorrect results, segfaults, or other errors. Please use with care and report any issues by opening a GitHub issue. (#758)

Other

  • Documentation improvements and fixes (#747) (#759)

v1.7.0 (June 4, 2024)

Added

  • Added sample_backward_noise_terms_with_state for creating backward pass sampling schemes that depend on the current primal state. (#742) (Thanks @arthur-brigatto)

Fixed

  • Fixed error message when publication_plot has non-finite data (#738)

Other

  • Updated the logo constructor (#730)

v1.6.7 (February 1, 2024)

Fixed

  • Fixed non-constant state dimension in the MSPFormat reader (#695)
  • Fixed SimulatorSamplingScheme for deterministic nodes (#710)
  • Fixed line search in BFGS (#711)
  • Fixed handling of NEARLY_FEASIBLE_POINT status (#726)

Other

  • Documentation improvements (#692) (#694) (#706) (#716) (#727)
  • Updated to StochOptFormat v1.0 (#705)
  • Added an experimental OuterApproximation algorithm (#709)
  • Updated .gitignore (#717)
  • Added code for MDP paper (#720) (#721)
  • Added Google analytics (#723)

v1.6.6 (September 29, 2023)

Other

v1.6.5 (September 25, 2023)

Fixed

Other

  • Updated tutorials (#677) (#678) (#682) (#683)
  • Fixed documentation preview (#679)

v1.6.4 (September 23, 2023)

Fixed

Other

  • Documentation updates (#658) (#666) (#671)
  • Switch to GitHub action for deploying docs (#668) (#670)
  • Update to Documenter@1 (#669)

v1.6.3 (September 8, 2023)

Fixed

  • Fixed default stopping rule with iteration_limit or time_limit set (#662)

Other

  • Various documentation improvements (#651) (#657) (#659) (#660)

v1.6.2 (August 24, 2023)

Fixed

  • MSPFormat now detects and exploits stagewise independent lattices (#653)
  • Fixed set_optimizer for models read from file (#654)

Other

  • Fixed typo in pglib_opf.jl (#647)
  • Fixed documentation build and added color (#652)

v1.6.1 (July 20, 2023)

Fixed

  • Fixed bugs in MSPFormat reader (#638) (#639)

Other

  • Clarified OutOfSampleMonteCarlo docstring (#643)

v1.6.0 (July 3, 2023)

Added

Other

v1.5.1 (June 30, 2023)

This release contains a number of minor code changes, but it has a large impact on the content that is printed to screen. In particular, we now log periodically, instead of each iteration, and a "good" stopping rule is used as the default if none are specified. Try using SDDP.train(model) to see the difference.

Other

  • Fixed various typos in the documentation (#617)
  • Fixed printing test after changes in JuMP (#618)
  • Set SimulationStoppingRule as the default stopping rule (#619)
  • Changed the default logging frequency. Pass log_every_seconds = 0.0 to train to revert to the old behavior. (#620)
  • Added example usage with Distributions.jl (@slwu89) (#622)
  • Removed the numerical issue @warn (#627)
  • Improved the quality of docstrings (#630)

v1.5.0 (May 14, 2023)

Added

  • Added the ability to use a different model for the forward pass. This is a novel feature that lets you train better policies when the model is non-convex or does not have a well-defined dual. See the Alternative forward models tutorial in which we train convex and non-convex formulations of the optimal power flow problem. (#611)

Other

  • Updated missing changelog entries (#608)
  • Removed global variables (#610)
  • Converted the Options struct to keyword arguments. This struct was a private implementation detail, but the change is breaking if you developed an extension to SDDP that touched these internals. (#612)
  • Fixed some typos (#613)

v1.4.0 (May 8, 2023)

Added

Fixed

  • Fixed parsing of some MSPFormat files (#602) (#604)
  • Fixed printing in header (#605)

v1.3.0 (May 3, 2023)

Added

  • Added experimental support for SDDP.MSPFormat.read_from_file (#593)

Other

  • Updated to StochOptFormat v0.3 (#600)

v1.2.1 (May 1, 2023)

Fixed

  • Fixed log_every_seconds (#597)

v1.2.0 (May 1, 2023)

Added

Other

  • Tweaked how the log is printed (#588)
  • Updated to StochOptFormat v0.2 (#592)

v1.1.4 (April 10, 2023)

Fixed

  • Logs are now flushed every iteration (#584)

Other

  • Added docstrings to various functions (#581)
  • Minor documentation updates (#580)
  • Clarified integrality documentation (#582)
  • Updated the README (#585)
  • Number of numerical issues is now printed to the log (#586)

v1.1.3 (April 2, 2023)

Other

v1.1.2 (March 18, 2023)

Other

v1.1.1 (March 16, 2023)

Other

  • Fixed email in Project.toml
  • Added notebook to documentation tutorials (#571)

v1.1.0 (January 12, 2023)

Added

v1.0.0 (January 3, 2023)

Although we're bumping MAJOR version, this is a non-breaking release. Going forward:

  • New features will bump the MINOR version
  • Bug fixes, maintenance, and documentation updates will bump the PATCH version
  • We will support only the Long Term Support (currently v1.6.7) and the latest patch (currently v1.8.4) releases of Julia. Updates to the LTS version will bump the MINOR version
  • Updates to the compat bounds of package dependencies will bump the PATCH version.

We do not intend any breaking changes to the public API, which would require a new MAJOR release. The public API is everything defined in the documentation. Anything not in the documentation is considered private and may change in any PATCH release.

Added

Other

  • Updated Plotting tools to use live plots (#563)
  • Added vale as a linter (#565)
  • Improved documentation for initializing a parallel scheme (#566)

v0.4.9 (January 3, 2023)

Added

Other

  • Added tutorial on Markov Decision Processes (#556)
  • Added two-stage newsvendor tutorial (#557)
  • Refactored the layout of the documentation (#554) (#555)
  • Updated copyright to 2023 (#558)
  • Fixed errors in the documentation (#561)

v0.4.8 (December 19, 2022)

Added

Fixed

  • Reverted then fixed (#531) because it failed to account for problems with integer variables (#546) (#551)

v0.4.7 (December 17, 2022)

Added

  • Added initial_node support to InSampleMonteCarlo and OutOfSampleMonteCarlo (#535)

Fixed

  • Rethrow InterruptException when solver is interrupted (#534)
  • Fixed numerical recovery when we need dual solutions (#531) (Thanks @bfpc)
  • Fixed re-using the dashboard = true option between solves (#538)
  • Fixed bug when no @stageobjective is set (now defaults to 0.0) (#539)
  • Fixed errors thrown when invalid inputs are provided to add_objective_state (#540)

Other

  • Drop support for Julia versions prior to 1.6 (#533)
  • Updated versions of dependencies (#522) (#533)
  • Switched to HiGHS in the documentation and tests (#533)
  • Added license headers (#519)
  • Fixed link in air conditioning example (#521) (Thanks @conema)
  • Clarified variable naming in deterministic equivalent (#525) (Thanks @lucasprocessi)
  • Added this change log (#536)
  • Cuts are now written to model.cuts.json when numerical instability is discovered. This can aid debugging because it allows you to reload the cuts as of the iteration that caused the numerical issue (#537)

v0.4.6 (March 25, 2022)

Other

  • Updated to JuMP v1.0 (#517)

v0.4.5 (March 9, 2022)

Fixed

  • Fixed issue with set_silent in a subproblem (#510)

Other

  • Fixed many typos (#500) (#501) (#506) (#511) (Thanks @bfpc)
  • Update to JuMP v0.23 (#514)
  • Added auto-regressive tutorial (#507)

v0.4.4 (December 11, 2021)

Added

  • Added BanditDuality (#471)
  • Added benchmark scripts (#475) (#476) (#490)
  • write_cuts_to_file now saves visited states (#468)

Fixed

  • Fixed BoundStalling in a deterministic policy (#470) (#474)
  • Fixed magnitude warning with zero coefficients (#483)

Other

  • Improvements to LagrangianDuality (#481) (#482) (#487)
  • Improvements to StrengthenedConicDuality (#486)
  • Switch to functional form for the tests (#478)
  • Fixed typos (#472) (Thanks @vfdev-5)
  • Update to JuMP v0.22 (#498)

v0.4.3 (August 31, 2021)

Added

  • Added biobjective solver (#462)
  • Added forward_pass_callback (#466)

Other

  • Update tutorials and documentation (#459) (#465)
  • Organize how paper materials are stored (#464)

v0.4.2 (August 24, 2021)

Fixed

  • Fixed a bug in Lagrangian duality (#457)

v0.4.1 (August 23, 2021)

Other

  • Minor changes to our implementation of LagrangianDuality (#454) (#455)

v0.4.0 (August 17, 2021)

Breaking

  • A large refactoring for how we handle stochastic integer programs. This added support for things like SDDP.ContinuousConicDuality and SDDP.LagrangianDuality. It was breaking because we removed the integrality_handler argument to PolicyGraph. (#449) (#453)

Other

  • Documentation improvements (#447) (#448) (#450)

v0.3.17 (July 6, 2021)

Added

Other

  • Display more model attributes (#438)
  • Documentation improvements (#433) (#437) (#439)

v0.3.16 (June 17, 2021)

Added

Other

  • Update risk measure docstrings (#418)

v0.3.15 (June 1, 2021)

Added

Fixed

  • Fixed scoping bug in SDDP.@stageobjective (#407)
  • Fixed a bug when the initial point is infeasible (#411)
  • Set subproblems to silent by default (#409)

Other

  • Add JuliaFormatter (#412)
  • Documentation improvements (#406) (#408)

v0.3.14 (March 30, 2021)

Fixed

  • Fixed O(N^2) behavior in get_same_children (#393)

v0.3.13 (March 27, 2021)

Fixed

  • Fixed bug in print.jl
  • Fixed compat of Reexport (#388)

v0.3.12 (March 22, 2021)

Added

  • Added problem statistics to header (#385) (#386)

Fixed

  • Fixed subtypes in visualization (#384)

v0.3.11 (March 22, 2021)

Fixed

  • Fixed constructor in direct mode (#383)

Other

  • Fix documentation (#379)

v0.3.10 (February 23, 2021)

Fixed

  • Fixed seriescolor in publication plot (#376)

v0.3.9 (February 20, 2021)

Added

  • Add option to simulate with different incoming state (#372)
  • Added warning for cuts with high dynamic range (#373)

Fixed

  • Fixed seriesalpha in publication plot (#375)

v0.3.8 (January 19, 2021)

Other

  • Documentation improvements (#367) (#369) (#370)

v0.3.7 (January 8, 2021)

Other

  • Documentation improvements (#362) (#363) (#365) (#366)
  • Bump copyright (#364)

v0.3.6 (December 17, 2020)

Other

  • Fix typos (#358)
  • Collapse navigation bar in docs (#359)
  • Update TagBot.yml (#361)

v0.3.5 (November 18, 2020)

Other

  • Update citations (#348)
  • Switch to GitHub actions (#355)

v0.3.4 (August 25, 2020)

Added

  • Added non-uniform distributionally robust risk measure (#328)
  • Added numerical recovery functions (#330)
  • Added experimental StochOptFormat (#332) (#336) (#337) (#341) (#343) (#344)
  • Added entropic risk measure (#347)

Other

  • Documentation improvements (#327) (#333) (#339) (#340)

v0.3.3 (June 19, 2020)

Added

  • Added asynchronous support for price and belief states (#325)
  • Added ForwardPass plug-in system (#320)

Fixed

  • Fix check for probabilities in Markovian graph (#322)

v0.3.2 (April 6, 2020)

Added

Other

  • Improve error message in deterministic equivalent (#312)
  • Update to RecipesBase 1.0 (#313)

v0.3.1 (February 26, 2020)

Fixed

  • Fixed filename in integrality_handlers.jl (#304)

v0.3.0 (February 20, 2020)

Breaking

  • Breaking changes to update to JuMP v0.21 (#300).

v0.2.4 (February 7, 2020)

Added

  • Added a counter for the number of total subproblem solves (#301)

Other

  • Update formatter (#298)
  • Added tests (#299)

v0.2.3 (January 24, 2020)

Added

  • Added support for convex risk measures (#294)

Fixed

  • Fixed bug when subproblem is infeasible (#296)
  • Fixed bug in deterministic equivalent (#297)

Other

  • Added example from IJOC paper (#293)

v0.2.2 (January 10, 2020)

Fixed

  • Fixed flakey time limit in tests (#291)

Other

  • Removed MathOptFormat.jl (#289)
  • Update copyright (#290)

v0.2.1 (December 19, 2019)

Added

  • Added support for approximating a Markov lattice (#282) (#285)
  • Add tools for visualizing the value function (#272) (#286)
  • Write .mof.json files on error (#284)

Other

  • Improve documentation (#281) (#283)
  • Update tests for Julia 1.3 (#287)

v0.2.0 (December 16, 2019)

This version added the asynchronous parallel implementation with a few minor breaking changes in how we iterated internally. It didn't break basic user-facing models, only implementations that implemented some of the extension features. It probably could have been a v1.1 release.

Added

  • Added asynchronous parallel implementation (#277)
  • Added roll-out algorithm for cyclic graphs (#279)

Other

  • Improved error messages in PolicyGraph (#271)
  • Added JuliaFormatter (#273) (#276)
  • Fixed compat bounds (#274) (#278)
  • Added documentation for simulating non-standard graphs (#280)

v0.1.0 (October 17, 2019)

A complete rewrite of SDDP.jl based on the policy graph framework. This was essentially a new package. It has minimal code in common with the previous implementation.

Development started on September 28, 2018 in Kokako.jl, and the code was merged into SDDP.jl on March 14, 2019.

The pull request SDDP.jl#180 lists the 29 issues that the rewrite closed.

v0.0.1 (April 18, 2018)

Initial release. Development had been underway since January 22, 2016 in the StochDualDynamicProgram.jl repository. The last development commit there was April 5, 2017. Work then continued in this repository for a year before the first tagged release.

+

Release notes

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

v1.9.0 (October 17, 2024)

Added

Fixed

  • Fixed the tests to skip threading tests if running in serial (#770)
  • Fixed BanditDuality to handle the case where the standard deviation is NaN (#779)
  • Fixed an error when lagged state variables are encountered in MSPFormat (#786)
  • Fixed publication_plot with replications of different lengths (#788)
  • Fixed CTRL+C interrupting the code at unsafe points (#789)

Other

  • Documentation improvements (#771) (#772)
  • Updated printing because of changes in JuMP (#773)

v1.8.1 (August 5, 2024)

Fixed

  • Fixed various issues with SDDP.Threaded() (#761)
  • Fixed a deprecation warning for sorting a dictionary (#763)

Other

  • Updated copyright notices (#762)
  • Updated .JuliaFormatter.toml (#764)

v1.8.0 (July 24, 2024)

Added

  • Added SDDP.Threaded(), which is an experimental parallel scheme that supports solving problems using multiple threads. Some parts of SDDP.jl may not be thread-safe, and this can cause incorrect results, segfaults, or other errors. Please use with care and report any issues by opening a GitHub issue. (#758)

Other

  • Documentation improvements and fixes (#747) (#759)

v1.7.0 (June 4, 2024)

Added

  • Added sample_backward_noise_terms_with_state for creating backward pass sampling schemes that depend on the current primal state. (#742) (Thanks @arthur-brigatto)

Fixed

  • Fixed error message when publication_plot has non-finite data (#738)

Other

  • Updated the logo constructor (#730)

v1.6.7 (February 1, 2024)

Fixed

  • Fixed non-constant state dimension in the MSPFormat reader (#695)
  • Fixed SimulatorSamplingScheme for deterministic nodes (#710)
  • Fixed line search in BFGS (#711)
  • Fixed handling of NEARLY_FEASIBLE_POINT status (#726)

Other

  • Documentation improvements (#692) (#694) (#706) (#716) (#727)
  • Updated to StochOptFormat v1.0 (#705)
  • Added an experimental OuterApproximation algorithm (#709)
  • Updated .gitignore (#717)
  • Added code for MDP paper (#720) (#721)
  • Added Google analytics (#723)

v1.6.6 (September 29, 2023)

Other

v1.6.5 (September 25, 2023)

Fixed

Other

  • Updated tutorials (#677) (#678) (#682) (#683)
  • Fixed documentation preview (#679)

v1.6.4 (September 23, 2023)

Fixed

Other

  • Documentation updates (#658) (#666) (#671)
  • Switch to GitHub action for deploying docs (#668) (#670)
  • Update to Documenter@1 (#669)

v1.6.3 (September 8, 2023)

Fixed

  • Fixed default stopping rule with iteration_limit or time_limit set (#662)

Other

  • Various documentation improvements (#651) (#657) (#659) (#660)

v1.6.2 (August 24, 2023)

Fixed

  • MSPFormat now detect and exploit stagewise independent lattices (#653)
  • Fixed set_optimizer for models read from file (#654)

Other

  • Fixed typo in pglib_opf.jl (#647)
  • Fixed documentation build and added color (#652)

v1.6.1 (July 20, 2023)

Fixed

  • Fixed bugs in MSPFormat reader (#638) (#639)

Other

  • Clarified OutOfSampleMonteCarlo docstring (#643)

v1.6.0 (July 3, 2023)

Added

Other

v1.5.1 (June 30, 2023)

This release contains a number of minor code changes, but it has a large impact on the content that is printed to screen. In particular, we now log periodically, instead of each iteration, and a "good" stopping rule is used as the default if none are specified. Try using SDDP.train(model) to see the difference.

Other

  • Fixed various typos in the documentation (#617)
  • Fixed printing test after changes in JuMP (#618)
  • Set SimulationStoppingRule as the default stopping rule (#619)
  • Changed the default logging frequency. Pass log_every_seconds = 0.0 to train to revert to the old behavior. (#620)
  • Added example usage with Distributions.jl (@slwu89) (#622)
  • Removed the numerical issue @warn (#627)
  • Improved the quality of docstrings (#630)

v1.5.0 (May 14, 2023)

Added

  • Added the ability to use a different model for the forward pass. This is a novel feature that lets you train better policies when the model is non-convex or does not have a well-defined dual. See the Alternative forward models tutorial in which we train convex and non-convex formulations of the optimal power flow problem. (#611)

Other

  • Updated missing changelog entries (#608)
  • Removed global variables (#610)
  • Converted the Options struct to keyword arguments. This struct was a private implementation detail, but the change is breaking if you developed an extension to SDDP that touched these internals. (#612)
  • Fixed some typos (#613)

v1.4.0 (May 8, 2023)

Added

Fixed

  • Fixed parsing of some MSPFormat files (#602) (#604)
  • Fixed printing in header (#605)

v1.3.0 (May 3, 2023)

Added

  • Added experimental support for SDDP.MSPFormat.read_from_file (#593)

Other

  • Updated to StochOptFormat v0.3 (#600)

v1.2.1 (May 1, 2023)

Fixed

  • Fixed log_every_seconds (#597)

v1.2.0 (May 1, 2023)

Added

Other

  • Tweaked how the log is printed (#588)
  • Updated to StochOptFormat v0.2 (#592)

v1.1.4 (April 10, 2023)

Fixed

  • Logs are now flushed every iteration (#584)

Other

  • Added docstrings to various functions (#581)
  • Minor documentation updates (#580)
  • Clarified integrality documentation (#582)
  • Updated the README (#585)
  • Number of numerical issues is now printed to the log (#586)

v1.1.3 (April 2, 2023)

Other

v1.1.2 (March 18, 2023)

Other

v1.1.1 (March 16, 2023)

Other

  • Fixed email in Project.toml
  • Added notebook to documentation tutorials (#571)

v1.1.0 (January 12, 2023)

Added

v1.0.0 (January 3, 2023)

Although we're bumping MAJOR version, this is a non-breaking release. Going forward:

  • New features will bump the MINOR version
  • Bug fixes, maintenance, and documentation updates will bump the PATCH version
  • We will support only the Long Term Support (currently v1.6.7) and the latest patch (currently v1.8.4) releases of Julia. Updates to the LTS version will bump the MINOR version
  • Updates to the compat bounds of package dependencies will bump the PATCH version.

We do not intend any breaking changes to the public API, which would require a new MAJOR release. The public API is everything defined in the documentation. Anything not in the documentation is considered private and may change in any PATCH release.

Added

Other

  • Updated Plotting tools to use live plots (#563)
  • Added vale as a linter (#565)
  • Improved documentation for initializing a parallel scheme (#566)

v0.4.9 (January 3, 2023)

Added

Other

  • Added tutorial on Markov Decision Processes (#556)
  • Added two-stage newsvendor tutorial (#557)
  • Refactored the layout of the documentation (#554) (#555)
  • Updated copyright to 2023 (#558)
  • Fixed errors in the documentation (#561)

v0.4.8 (December 19, 2022)

Added

Fixed

  • Reverted then fixed (#531) because it failed to account for problems with integer variables (#546) (#551)

v0.4.7 (December 17, 2022)

Added

  • Added initial_node support to InSampleMonteCarlo and OutOfSampleMonteCarlo (#535)

Fixed

  • Rethrow InterruptException when solver is interrupted (#534)
  • Fixed numerical recovery when we need dual solutions (#531) (Thanks @bfpc)
  • Fixed re-using the dashboard = true option between solves (#538)
  • Fixed bug when no @stageobjective is set (now defaults to 0.0) (#539)
  • Fixed errors thrown when invalid inputs are provided to add_objective_state (#540)

Other

  • Drop support for Julia versions prior to 1.6 (#533)
  • Updated versions of dependencies (#522) (#533)
  • Switched to HiGHS in the documentation and tests (#533)
  • Added license headers (#519)
  • Fixed link in air conditioning example (#521) (Thanks @conema)
  • Clarified variable naming in deterministic equivalent (#525) (Thanks @lucasprocessi)
  • Added this change log (#536)
  • Cuts are now written to model.cuts.json when numerical instability is discovered. This can aid debugging because it allows to you reload the cuts as of the iteration that caused the numerical issue (#537)

v0.4.6 (March 25, 2022)

Other

  • Updated to JuMP v1.0 (#517)

v0.4.5 (March 9, 2022)

Fixed

  • Fixed issue with set_silent in a subproblem (#510)

Other

  • Fixed many typos (#500) (#501) (#506) (#511) (Thanks @bfpc)
  • Update to JuMP v0.23 (#514)
  • Added auto-regressive tutorial (#507)

v0.4.4 (December 11, 2021)

Added

  • Added BanditDuality (#471)
  • Added benchmark scripts (#475) (#476) (#490)
  • write_cuts_to_file now saves visited states (#468)

Fixed

  • Fixed BoundStalling in a deterministic policy (#470) (#474)
  • Fixed magnitude warning with zero coefficients (#483)

Other

  • Improvements to LagrangianDuality (#481) (#482) (#487)
  • Improvements to StrengthenedConicDuality (#486)
  • Switch to functional form for the tests (#478)
  • Fixed typos (#472) (Thanks @vfdev-5)
  • Update to JuMP v0.22 (#498)

v0.4.3 (August 31, 2021)

Added

  • Added biobjective solver (#462)
  • Added forward_pass_callback (#466)

Other

  • Update tutorials and documentation (#459) (#465)
  • Organize how paper materials are stored (#464)

v0.4.2 (August 24, 2021)

Fixed

  • Fixed a bug in Lagrangian duality (#457)

v0.4.1 (August 23, 2021)

Other

  • Minor changes to our implementation of LagrangianDuality (#454) (#455)

v0.4.0 (August 17, 2021)

Breaking

  • A large refactoring for how we handle stochastic integer programs. This added support for things like SDDP.ContinuousConicDuality and SDDP.LagrangianDuality. It was breaking because we removed the integrality_handler argument to PolicyGraph. (#449) (#453)
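A rough sketch of the new pattern, assuming model is an existing SDDP.PolicyGraph with integer subproblems; the duality handler is now passed to SDDP.train instead of an integrality_handler to PolicyGraph:

using SDDP

# Choose how duals are computed for the nonconvex subproblems at training time.
SDDP.train(model; duality_handler = SDDP.LagrangianDuality())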

Other

  • Documentation improvements (#447) (#448) (#450)

v0.3.17 (July 6, 2021)

Added

Other

  • Display more model attributes (#438)
  • Documentation improvements (#433) (#437) (#439)

v0.3.16 (June 17, 2021)

Added

Other

  • Update risk measure docstrings (#418)

v0.3.15 (June 1, 2021)

Added

Fixed

  • Fixed scoping bug in SDDP.@stageobjective (#407)
  • Fixed a bug when the initial point is infeasible (#411)
  • Set subproblems to silent by default (#409)

Other

  • Add JuliaFormatter (#412)
  • Documentation improvements (#406) (#408)

v0.3.14 (March 30, 2021)

Fixed

  • Fixed O(N^2) behavior in get_same_children (#393)

v0.3.13 (March 27, 2021)

Fixed

  • Fixed bug in print.jl
  • Fixed compat of Reexport (#388)

v0.3.12 (March 22, 2021)

Added

  • Added problem statistics to header (#385) (#386)

Fixed

  • Fixed subtypes in visualization (#384)

v0.3.11 (March 22, 2021)

Fixed

  • Fixed constructor in direct mode (#383)

Other

  • Fix documentation (#379)

v0.3.10 (February 23, 2021)

Fixed

  • Fixed seriescolor in publication plot (#376)

v0.3.9 (February 20, 2021)

Added

  • Add option to simulate with different incoming state (#372)
  • Added warning for cuts with high dynamic range (#373)

Fixed

  • Fixed seriesalpha in publication plot (#375)

v0.3.8 (January 19, 2021)

Other

  • Documentation improvements (#367) (#369) (#370)

v0.3.7 (January 8, 2021)

Other

  • Documentation improvements (#362) (#363) (#365) (#366)
  • Bump copyright (#364)

v0.3.6 (December 17, 2020)

Other

  • Fix typos (#358)
  • Collapse navigation bar in docs (#359)
  • Update TagBot.yml (#361)

v0.3.5 (November 18, 2020)

Other

  • Update citations (#348)
  • Switch to GitHub actions (#355)

v0.3.4 (August 25, 2020)

Added

  • Added non-uniform distributionally robust risk measure (#328)
  • Added numerical recovery functions (#330)
  • Added experimental StochOptFormat (#332) (#336) (#337) (#341) (#343) (#344)
  • Added entropic risk measure (#347)

Other

  • Documentation improvements (#327) (#333) (#339) (#340)

v0.3.3 (June 19, 2020)

Added

  • Added asynchronous support for price and belief states (#325)
  • Added ForwardPass plug-in system (#320)

Fixed

  • Fix check for probabilities in Markovian graph (#322)

v0.3.2 (April 6, 2020)

Added

Other

  • Improve error message in deterministic equivalent (#312)
  • Update to RecipesBase 1.0 (#313)

v0.3.1 (February 26, 2020)

Fixed

  • Fixed filename in integrality_handlers.jl (#304)

v0.3.0 (February 20, 2020)

Breaking

  • Breaking changes to update to JuMP v0.21 (#300).

v0.2.4 (February 7, 2020)

Added

  • Added a counter for the number of total subproblem solves (#301)

Other

  • Update formatter (#298)
  • Added tests (#299)

v0.2.3 (January 24, 2020)

Added

  • Added support for convex risk measures (#294)

Fixed

  • Fixed bug when subproblem is infeasible (#296)
  • Fixed bug in deterministic equivalent (#297)

Other

  • Added example from IJOC paper (#293)

v0.2.2 (January 10, 2020)

Fixed

  • Fixed flakey time limit in tests (#291)

Other

  • Removed MathOptFormat.jl (#289)
  • Update copyright (#290)

v0.2.1 (December 19, 2019)

Added

  • Added support for approximating a Markov lattice (#282) (#285)
  • Add tools for visualizing the value function (#272) (#286)
  • Write .mof.json files on error (#284)

Other

  • Improve documentation (#281) (#283)
  • Update tests for Julia 1.3 (#287)

v0.2.0 (December 16, 2019)

This version added the asynchronous parallel implementation with a few minor breaking changes in how we iterated internally. It didn't break basic user-facing models, only code that implemented some of the extension features. It probably could have been a v1.1 release.

Added

  • Added asynchronous parallel implementation (#277)
  • Added roll-out algorithm for cyclic graphs (#279)

Other

  • Improved error messages in PolicyGraph (#271)
  • Added JuliaFormatter (#273) (#276)
  • Fixed compat bounds (#274) (#278)
  • Added documentation for simulating non-standard graphs (#280)

v0.1.0 (October 17, 2019)

A complete rewrite of SDDP.jl based on the policy graph framework. This was essentially a new package. It has minimal code in common with the previous implementation.

Development started on September 28, 2018 in Kokako.jl, and the code was merged into SDDP.jl on March 14, 2019.

The pull request SDDP.jl#180 lists the 29 issues that the rewrite closed.

v0.0.1 (April 18, 2018)

Initial release. Development had been underway since January 22, 2016 in the StochDualDynamicProgram.jl repository. The last development commit there was April 5, 2017. Work then continued in this repository for a year before the first tagged release.

diff --git a/previews/PR797/examples/FAST_hydro_thermal/index.html b/previews/PR797/examples/FAST_hydro_thermal/index.html index 9567b8399..a26ddf48e 100644 --- a/previews/PR797/examples/FAST_hydro_thermal/index.html +++ b/previews/PR797/examples/FAST_hydro_thermal/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

FAST: the hydro-thermal problem

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

An implementation of the Hydro-thermal example from FAST

using SDDP, HiGHS, Test
+

FAST: the hydro-thermal problem

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

An implementation of the Hydro-thermal example from FAST

using SDDP, HiGHS, Test
 
 function fast_hydro_thermal()
     model = SDDP.LinearPolicyGraph(;
@@ -66,13 +66,13 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   0.000000e+00 -1.000000e+01  2.629995e-03         5   1
-        20   0.000000e+00 -1.000000e+01  1.441717e-02       104   1
+         1   0.000000e+00 -1.000000e+01  2.633095e-03         5   1
+        20   0.000000e+00 -1.000000e+01  1.451397e-02       104   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 1.441717e-02
+total time (s) : 1.451397e-02
 total solves   : 104
 best bound     : -1.000000e+01
 simulation ci  : -9.000000e+00 ± 4.474009e+00
 numeric issues : 0
--------------------------------------------------------------------
+-------------------------------------------------------------------
diff --git a/previews/PR797/examples/FAST_production_management/index.html b/previews/PR797/examples/FAST_production_management/index.html index f86ef1670..cae66a0a3 100644 --- a/previews/PR797/examples/FAST_production_management/index.html +++ b/previews/PR797/examples/FAST_production_management/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

FAST: the production management problem

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

An implementation of the Production Management example from FAST

using SDDP, HiGHS, Test
+
+fast_production_management(; cut_type = SDDP.MULTI_CUT)
Test Passed
diff --git a/previews/PR797/examples/FAST_quickstart/index.html b/previews/PR797/examples/FAST_quickstart/index.html index 47247fbb4..8b0087507 100644 --- a/previews/PR797/examples/FAST_quickstart/index.html +++ b/previews/PR797/examples/FAST_quickstart/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

FAST: the quickstart problem

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

An implementation of the QuickStart example from FAST

using SDDP, HiGHS, Test
+
+fast_quickstart()
Test Passed
diff --git a/previews/PR797/examples/Hydro_thermal/index.html b/previews/PR797/examples/Hydro_thermal/index.html index aa7babd5b..4d5825c84 100644 --- a/previews/PR797/examples/Hydro_thermal/index.html +++ b/previews/PR797/examples/Hydro_thermal/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Hydro-thermal scheduling

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

Problem Description

In a hydro-thermal problem, the agent controls a hydro-electric generator and reservoir. Each time period, they need to choose a generation quantity from thermal g_t, and hydro g_h, in order to meet demand w_d, which is a stagewise-independent random variable. The state variable, x, is the quantity of water in the reservoir at the start of each time period, and it has a minimum level of 5 units and a maximum level of 15 units. We assume that there are 10 units of water in the reservoir at the start of time, so that x_0 = 10. The state-variable is connected through time by the water balance constraint: x.out = x.in - g_h - s + w_i, where x.out is the quantity of water at the end of the time period, x.in is the quantity of water at the start of the time period, s is the quantity of water spilled from the reservoir, and w_i is a stagewise-independent random variable that represents the inflow into the reservoir during the time period.

We assume that there are three stages, t=1, 2, 3, representing summer-fall, winter, and spring, and that we are solving this problem in an infinite-horizon setting with a discount factor of 0.95.

In each stage, the agent incurs the cost of spillage, plus the cost of thermal generation. We assume that the cost of thermal generation is dependent on the stage t = 1, 2, 3, and that in each stage, w is drawn from the set (w_i, w_d) = {(0, 7.5), (3, 5), (10, 2.5)} with equal probability.

Importing packages

For this example, in addition to SDDP, we need HiGHS as a solver and Statistics to compute the mean of our simulations.

using HiGHS
+

Hydro-thermal scheduling

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

Problem Description

In a hydro-thermal problem, the agent controls a hydro-electric generator and reservoir. Each time period, they need to choose a generation quantity from thermal g_t, and hydro g_h, in order to meet demand w_d, which is a stagewise-independent random variable. The state variable, x, is the quantity of water in the reservoir at the start of each time period, and it has a minimum level of 5 units and a maximum level of 15 units. We assume that there are 10 units of water in the reservoir at the start of time, so that x_0 = 10. The state-variable is connected through time by the water balance constraint: x.out = x.in - g_h - s + w_i, where x.out is the quantity of water at the end of the time period, x.in is the quantity of water at the start of the time period, s is the quantity of water spilled from the reservoir, and w_i is a stagewise-independent random variable that represents the inflow into the reservoir during the time period.

We assume that there are three stages, t=1, 2, 3, representing summer-fall, winter, and spring, and that we are solving this problem in an infinite-horizon setting with a discount factor of 0.95.

In each stage, the agent incurs the cost of spillage, plus the cost of thermal generation. We assume that the cost of thermal generation is dependent on the stage t = 1, 2, 3, and that in each stage, w is drawn from the set (w_i, w_d) = {(0, 7.5), (3, 5), (10, 2.5)} with equal probability.
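As a rough sketch of how the description above maps to code (assuming it lives inside the do sp, t ... end subproblem builder of an SDDP.PolicyGraph, and that the inflow w_i is fixed for each noise realization via SDDP.parameterize):

@variable(sp, 5 <= x <= 15, SDDP.State, initial_value = 10)
@variable(sp, g_h >= 0)  # hydro generation
@variable(sp, g_t >= 0)  # thermal generation
@variable(sp, s >= 0)    # spilled water
@variable(sp, w_i)       # inflow, fixed per scenario by SDDP.parameterize
@constraint(sp, x.out == x.in - g_h - s + w_i)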

Importing packages

For this example, in addition to SDDP, we need HiGHS as a solver and Statistics to compute the mean of our simulations.

using HiGHS
 using SDDP
 using Statistics

Constructing the policy graph

There are three stages in our infinite-horizon problem, so we construct a unicyclic policy graph using SDDP.UnicyclicGraph:

graph = SDDP.UnicyclicGraph(0.95; num_nodes = 3)
Root
  0
@@ -59,15 +59,15 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   2.390000e+02  6.304440e+01  1.043630e-01       183   1
-        32   3.987949e+01  2.346450e+02  1.105374e+00      7752   1
-        53   2.848993e+02  2.362264e+02  2.115026e+00     13899   1
-        72   6.195245e+02  2.363946e+02  3.185670e+00     19692   1
-        84   1.975040e+02  2.364251e+02  4.238926e+00     24504   1
-       100   1.135002e+02  2.364293e+02  4.719036e+00     26640   1
+         1   2.390000e+02  6.304440e+01  1.043160e-01       183   1
+        31   8.517170e+02  2.346450e+02  1.104898e+00      7701   1
+        53   2.848993e+02  2.362264e+02  2.144756e+00     13899   1
+        72   6.195245e+02  2.363946e+02  3.242283e+00     19692   1
+        83   1.925059e+02  2.364242e+02  4.302291e+00     24345   1
+       100   1.135002e+02  2.364293e+02  4.850230e+00     26640   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 4.719036e+00
+total time (s) : 4.850230e+00
 total solves   : 26640
 best bound     :  2.364293e+02
 simulation ci  :  2.593398e+02 ± 5.186931e+01
@@ -75,4 +75,4 @@
 -------------------------------------------------------------------

Simulating the policy

After training, we can simulate the policy using SDDP.simulate.

sims = SDDP.simulate(model, 100, [:g_t])
 mu = round(mean([s[1][:g_t] for s in sims]); digits = 2)
 println("On average, $(mu) units of thermal are used in the first stage.")
On average, 1.71 units of thermal are used in the first stage.

Extracting the water values

Finally, we can use SDDP.ValueFunction and SDDP.evaluate to obtain and evaluate the value function at different points in the state-space. Note that since we are minimizing, the price has a negative sign: each additional unit of water leads to a decrease in the expected long-run cost.

V = SDDP.ValueFunction(model[1])
-cost, price = SDDP.evaluate(V; x = 10)
(233.55074662683333, Dict(:x => -0.6602685305287201))
+cost, price = SDDP.evaluate(V; x = 10)
(233.55074662683333, Dict(:x => -0.6602685305287201))
diff --git a/previews/PR797/examples/SDDP.log b/previews/PR797/examples/SDDP.log index 8a5ecc317..c0d8246e6 100644 --- a/previews/PR797/examples/SDDP.log +++ b/previews/PR797/examples/SDDP.log @@ -25,11 +25,11 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 0.000000e+00 -1.000000e+01 2.629995e-03 5 1 - 20 0.000000e+00 -1.000000e+01 1.441717e-02 104 1 + 1 0.000000e+00 -1.000000e+01 2.633095e-03 5 1 + 20 0.000000e+00 -1.000000e+01 1.451397e-02 104 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.441717e-02 +total time (s) : 1.451397e-02 total solves : 104 best bound : -1.000000e+01 simulation ci : -9.000000e+00 ± 4.474009e+00 @@ -61,17 +61,17 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 5 -2.396000e+01 -2.396000e+01 7.363081e-03 52 1 - 10 -4.260000e+01 -2.396000e+01 1.099610e-02 92 1 - 15 -4.260000e+01 -2.396000e+01 1.488709e-02 132 1 - 20 -4.260000e+01 -2.396000e+01 1.911211e-02 172 1 - 25 -2.396000e+01 -2.396000e+01 2.440619e-02 224 1 - 30 -4.260000e+01 -2.396000e+01 2.908516e-02 264 1 - 35 -2.396000e+01 -2.396000e+01 3.414798e-02 304 1 - 40 -2.396000e+01 -2.396000e+01 3.956008e-02 344 1 + 5 -2.396000e+01 -2.396000e+01 6.983995e-03 52 1 + 10 -4.260000e+01 -2.396000e+01 1.058793e-02 92 1 + 15 -4.260000e+01 -2.396000e+01 1.432490e-02 132 1 + 20 -4.260000e+01 -2.396000e+01 1.839685e-02 172 1 + 25 -2.396000e+01 -2.396000e+01 2.347684e-02 224 1 + 30 -4.260000e+01 -2.396000e+01 2.964592e-02 264 1 + 35 -2.396000e+01 -2.396000e+01 3.634191e-02 304 1 + 40 -2.396000e+01 -2.396000e+01 4.203892e-02 344 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 3.956008e-02 +total time (s) : 4.203892e-02 total solves : 344 best bound : -2.396000e+01 simulation ci : -2.660914e+01 ± 3.908038e+00 @@ -81,21 +81,21 @@ numeric issues : 0 ──────────────────────────────────────────────────────────────────────────────── Time Allocations ─────────────────────── ──────────────────────── - Tot / % measured: 576ms / 6.0% 33.0MiB / 21.0% + Tot / % measured: 49.6ms / 75.7% 33.3MiB / 21.7% Section ncalls time %tot avg alloc %tot avg ──────────────────────────────────────────────────────────────────────────────── -backward_pass 40 21.7ms 62.5% 544μs 5.96MiB 86.0% 153KiB - solve_subproblem 160 11.9ms 34.2% 74.4μs 871KiB 12.3% 5.44KiB - get_dual_solution 160 622μs 1.8% 3.89μs 190KiB 2.7% 1.19KiB - prepare_backward... 160 26.0μs 0.1% 162ns 0.00B 0.0% 0.00B -forward_pass 40 7.91ms 22.7% 198μs 768KiB 10.8% 19.2KiB - solve_subproblem 120 6.96ms 20.0% 58.0μs 588KiB 8.3% 4.90KiB - get_dual_solution 120 79.7μs 0.2% 664ns 16.9KiB 0.2% 144B - sample_scenario 40 132μs 0.4% 3.29μs 24.5KiB 0.3% 628B -calculate_bound 40 5.10ms 14.7% 127μs 224KiB 3.2% 5.59KiB - get_dual_solution 40 36.6μs 0.1% 915ns 5.62KiB 0.1% 144B -get_dual_solution 36 22.1μs 0.1% 613ns 5.06KiB 0.1% 144B +backward_pass 40 23.6ms 62.8% 590μs 6.24MiB 86.5% 160KiB + solve_subproblem 160 12.8ms 34.1% 80.1μs 871KiB 11.8% 5.44KiB + get_dual_solution 160 639μs 1.7% 3.99μs 190KiB 2.6% 1.19KiB + prepare_backward... 
160 32.5μs 0.1% 203ns 0.00B 0.0% 0.00B +forward_pass 40 8.34ms 22.2% 209μs 768KiB 10.4% 19.2KiB + solve_subproblem 120 7.42ms 19.7% 61.9μs 588KiB 8.0% 4.90KiB + get_dual_solution 120 82.1μs 0.2% 684ns 16.9KiB 0.2% 144B + sample_scenario 40 137μs 0.4% 3.44μs 24.5KiB 0.3% 628B +calculate_bound 40 5.63ms 15.0% 141μs 224KiB 3.0% 5.61KiB + get_dual_solution 40 35.5μs 0.1% 888ns 5.62KiB 0.1% 144B +get_dual_solution 36 22.3μs 0.1% 620ns 5.06KiB 0.1% 144B ──────────────────────────────────────────────────────────────────────────────── ------------------------------------------------------------------- @@ -123,17 +123,17 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 5 -5.320000e+00 -2.396000e+01 7.251024e-03 52 1 - 10 -5.320000e+00 -2.396000e+01 1.123786e-02 92 1 - 15 -2.396000e+01 -2.396000e+01 1.563501e-02 132 1 - 20 -5.320000e+00 -2.396000e+01 2.049494e-02 172 1 - 25 -4.260000e+01 -2.396000e+01 2.678490e-02 224 1 - 30 -2.396000e+01 -2.396000e+01 3.270006e-02 264 1 - 35 -2.396000e+01 -2.396000e+01 3.917789e-02 304 1 - 40 -2.396000e+01 -2.396000e+01 4.625201e-02 344 1 + 5 -5.320000e+00 -2.396000e+01 7.585049e-03 52 1 + 10 -5.320000e+00 -2.396000e+01 1.167512e-02 92 1 + 15 -2.396000e+01 -2.396000e+01 4.954910e-02 132 1 + 20 -5.320000e+00 -2.396000e+01 5.459309e-02 172 1 + 25 -4.260000e+01 -2.396000e+01 6.105614e-02 224 1 + 30 -2.396000e+01 -2.396000e+01 6.720304e-02 264 1 + 35 -2.396000e+01 -2.396000e+01 7.380509e-02 304 1 + 40 -2.396000e+01 -2.396000e+01 8.108902e-02 344 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 4.625201e-02 +total time (s) : 8.108902e-02 total solves : 344 best bound : -2.396000e+01 simulation ci : -1.957570e+01 ± 3.890802e+00 @@ -143,21 +143,21 @@ numeric issues : 0 ──────────────────────────────────────────────────────────────────────────────── Time Allocations ─────────────────────── ──────────────────────── - Tot / % measured: 50.6ms / 82.7% 39.0MiB / 33.3% + Tot / % measured: 85.4ms / 50.8% 39.1MiB / 33.4% Section ncalls time %tot avg alloc %tot avg ──────────────────────────────────────────────────────────────────────────────── -backward_pass 40 28.5ms 68.1% 712μs 12.0MiB 92.5% 307KiB - solve_subproblem 160 12.2ms 29.2% 76.2μs 872KiB 6.6% 5.45KiB - get_dual_solution 160 589μs 1.4% 3.68μs 190KiB 1.4% 1.19KiB - prepare_backward... 160 25.2μs 0.1% 157ns 0.00B 0.0% 0.00B -forward_pass 40 7.82ms 18.7% 196μs 768KiB 5.8% 19.2KiB - solve_subproblem 120 6.94ms 16.6% 57.8μs 588KiB 4.4% 4.90KiB - get_dual_solution 120 68.8μs 0.2% 573ns 16.9KiB 0.1% 144B - sample_scenario 40 132μs 0.3% 3.29μs 24.2KiB 0.2% 620B -calculate_bound 40 5.49ms 13.1% 137μs 226KiB 1.7% 5.64KiB - get_dual_solution 40 32.2μs 0.1% 805ns 5.62KiB 0.0% 144B -get_dual_solution 36 19.1μs 0.0% 530ns 5.06KiB 0.0% 144B +backward_pass 40 29.0ms 66.9% 726μs 12.1MiB 92.5% 309KiB + solve_subproblem 160 12.6ms 29.0% 78.8μs 872KiB 6.5% 5.45KiB + get_dual_solution 160 594μs 1.4% 3.71μs 190KiB 1.4% 1.19KiB + prepare_backward... 
160 29.2μs 0.1% 183ns 0.00B 0.0% 0.00B +forward_pass 40 8.57ms 19.8% 214μs 768KiB 5.7% 19.2KiB + solve_subproblem 120 7.62ms 17.6% 63.5μs 588KiB 4.4% 4.90KiB + get_dual_solution 120 77.4μs 0.2% 645ns 16.9KiB 0.1% 144B + sample_scenario 40 142μs 0.3% 3.56μs 24.2KiB 0.2% 620B +calculate_bound 40 5.78ms 13.3% 144μs 226KiB 1.7% 5.66KiB + get_dual_solution 40 36.3μs 0.1% 908ns 5.62KiB 0.0% 144B +get_dual_solution 36 22.6μs 0.1% 627ns 5.06KiB 0.0% 144B ──────────────────────────────────────────────────────────────────────────────── ------------------------------------------------------------------- @@ -185,49 +185,49 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 0.000000e+00 -2.500000e+00 2.010107e-03 5 1 - 2 -1.500000e+00 -2.000000e+00 2.986193e-03 14 1 - 3 -1.000000e+00 -2.000000e+00 3.486156e-03 19 1 - 4 -1.000000e+00 -2.000000e+00 4.073143e-03 24 1 - 5 -2.000000e+00 -2.000000e+00 4.711151e-03 29 1 - 6 -2.000000e+00 -2.000000e+00 5.319118e-03 34 1 - 7 -2.000000e+00 -2.000000e+00 5.902052e-03 39 1 - 8 -2.000000e+00 -2.000000e+00 6.491184e-03 44 1 - 9 -2.000000e+00 -2.000000e+00 7.072210e-03 49 1 - 10 -2.000000e+00 -2.000000e+00 7.661104e-03 54 1 - 11 -2.000000e+00 -2.000000e+00 8.263111e-03 59 1 - 12 -2.000000e+00 -2.000000e+00 8.854151e-03 64 1 - 13 -2.000000e+00 -2.000000e+00 9.504080e-03 69 1 - 14 -2.000000e+00 -2.000000e+00 1.010609e-02 74 1 - 15 -2.000000e+00 -2.000000e+00 1.069999e-02 79 1 - 16 -2.000000e+00 -2.000000e+00 1.132011e-02 84 1 - 17 -2.000000e+00 -2.000000e+00 1.194501e-02 89 1 - 18 -2.000000e+00 -2.000000e+00 1.256800e-02 94 1 - 19 -2.000000e+00 -2.000000e+00 1.319814e-02 99 1 - 20 -2.000000e+00 -2.000000e+00 1.386714e-02 104 1 - 21 -2.000000e+00 -2.000000e+00 1.483917e-02 113 1 - 22 -2.000000e+00 -2.000000e+00 1.549411e-02 118 1 - 23 -2.000000e+00 -2.000000e+00 1.614714e-02 123 1 - 24 -2.000000e+00 -2.000000e+00 1.680899e-02 128 1 - 25 -2.000000e+00 -2.000000e+00 1.752710e-02 133 1 - 26 -2.000000e+00 -2.000000e+00 1.819205e-02 138 1 - 27 -2.000000e+00 -2.000000e+00 1.885915e-02 143 1 - 28 -2.000000e+00 -2.000000e+00 1.955318e-02 148 1 - 29 -2.000000e+00 -2.000000e+00 2.023602e-02 153 1 - 30 -2.000000e+00 -2.000000e+00 2.092505e-02 158 1 - 31 -2.000000e+00 -2.000000e+00 2.165318e-02 163 1 - 32 -2.000000e+00 -2.000000e+00 2.235317e-02 168 1 - 33 -2.000000e+00 -2.000000e+00 2.304101e-02 173 1 - 34 -2.000000e+00 -2.000000e+00 2.375221e-02 178 1 - 35 -2.000000e+00 -2.000000e+00 2.445602e-02 183 1 - 36 -2.000000e+00 -2.000000e+00 2.516198e-02 188 1 - 37 -2.000000e+00 -2.000000e+00 2.591801e-02 193 1 - 38 -2.000000e+00 -2.000000e+00 2.665019e-02 198 1 - 39 -2.000000e+00 -2.000000e+00 2.737308e-02 203 1 - 40 -2.000000e+00 -2.000000e+00 2.810216e-02 208 1 + 1 0.000000e+00 -2.500000e+00 1.929045e-03 5 1 + 2 -1.500000e+00 -2.000000e+00 2.979040e-03 14 1 + 3 -1.000000e+00 -2.000000e+00 3.514051e-03 19 1 + 4 -1.000000e+00 -2.000000e+00 4.071951e-03 24 1 + 5 -2.000000e+00 -2.000000e+00 4.725933e-03 29 1 + 6 -2.000000e+00 -2.000000e+00 5.305052e-03 34 1 + 7 -2.000000e+00 -2.000000e+00 5.877018e-03 39 1 + 8 -2.000000e+00 -2.000000e+00 6.449938e-03 44 1 + 9 -2.000000e+00 -2.000000e+00 7.025957e-03 49 1 + 10 -2.000000e+00 -2.000000e+00 7.637024e-03 54 1 + 11 -2.000000e+00 -2.000000e+00 8.260965e-03 59 1 + 12 -2.000000e+00 -2.000000e+00 8.863926e-03 64 1 + 13 -2.000000e+00 -2.000000e+00 9.449005e-03 69 1 + 14 -2.000000e+00 
-2.000000e+00 1.005197e-02 74 1 + 15 -2.000000e+00 -2.000000e+00 1.066208e-02 79 1 + 16 -2.000000e+00 -2.000000e+00 1.126194e-02 84 1 + 17 -2.000000e+00 -2.000000e+00 1.187205e-02 89 1 + 18 -2.000000e+00 -2.000000e+00 1.252007e-02 94 1 + 19 -2.000000e+00 -2.000000e+00 1.314998e-02 99 1 + 20 -2.000000e+00 -2.000000e+00 3.840995e-02 104 1 + 21 -2.000000e+00 -2.000000e+00 3.944802e-02 113 1 + 22 -2.000000e+00 -2.000000e+00 4.010296e-02 118 1 + 23 -2.000000e+00 -2.000000e+00 4.078293e-02 123 1 + 24 -2.000000e+00 -2.000000e+00 4.144502e-02 128 1 + 25 -2.000000e+00 -2.000000e+00 4.209805e-02 133 1 + 26 -2.000000e+00 -2.000000e+00 4.273701e-02 138 1 + 27 -2.000000e+00 -2.000000e+00 4.338098e-02 143 1 + 28 -2.000000e+00 -2.000000e+00 4.403901e-02 148 1 + 29 -2.000000e+00 -2.000000e+00 4.472804e-02 153 1 + 30 -2.000000e+00 -2.000000e+00 4.539704e-02 158 1 + 31 -2.000000e+00 -2.000000e+00 4.606104e-02 163 1 + 32 -2.000000e+00 -2.000000e+00 4.672503e-02 168 1 + 33 -2.000000e+00 -2.000000e+00 4.739499e-02 173 1 + 34 -2.000000e+00 -2.000000e+00 4.807401e-02 178 1 + 35 -2.000000e+00 -2.000000e+00 4.879308e-02 183 1 + 36 -2.000000e+00 -2.000000e+00 4.948497e-02 188 1 + 37 -2.000000e+00 -2.000000e+00 5.018306e-02 193 1 + 38 -2.000000e+00 -2.000000e+00 5.089307e-02 198 1 + 39 -2.000000e+00 -2.000000e+00 5.158591e-02 203 1 + 40 -2.000000e+00 -2.000000e+00 5.232406e-02 208 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 2.810216e-02 +total time (s) : 5.232406e-02 total solves : 208 best bound : -2.000000e+00 simulation ci : -1.887500e+00 ± 1.189300e-01 @@ -259,15 +259,15 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 2.390000e+02 6.304440e+01 1.043630e-01 183 1 - 32 3.987949e+01 2.346450e+02 1.105374e+00 7752 1 - 53 2.848993e+02 2.362264e+02 2.115026e+00 13899 1 - 72 6.195245e+02 2.363946e+02 3.185670e+00 19692 1 - 84 1.975040e+02 2.364251e+02 4.238926e+00 24504 1 - 100 1.135002e+02 2.364293e+02 4.719036e+00 26640 1 + 1 2.390000e+02 6.304440e+01 1.043160e-01 183 1 + 31 8.517170e+02 2.346450e+02 1.104898e+00 7701 1 + 53 2.848993e+02 2.362264e+02 2.144756e+00 13899 1 + 72 6.195245e+02 2.363946e+02 3.242283e+00 19692 1 + 83 1.925059e+02 2.364242e+02 4.302291e+00 24345 1 + 100 1.135002e+02 2.364293e+02 4.850230e+00 26640 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 4.719036e+00 +total time (s) : 4.850230e+00 total solves : 26640 best bound : 2.364293e+02 simulation ci : 2.593398e+02 ± 5.186931e+01 @@ -300,19 +300,19 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 -3.878303e+00 -4.434982e+00 1.973391e-01 1400 1 - 20 -4.262885e+00 -4.399265e+00 3.153272e-01 2800 1 - 30 -3.075162e+00 -4.382527e+00 4.400902e-01 4200 1 - 40 -3.761147e+00 -4.369587e+00 5.725801e-01 5600 1 - 50 -4.323162e+00 -4.362199e+00 7.118852e-01 7000 1 - 60 -3.654943e+00 -4.358401e+00 8.551462e-01 8400 1 - 70 -4.010883e+00 -4.357368e+00 9.963672e-01 9800 1 - 80 -4.314412e+00 -4.355714e+00 1.143003e+00 11200 1 - 90 -4.542422e+00 -4.353708e+00 1.354980e+00 12600 1 - 100 -4.178952e+00 -4.351685e+00 1.504494e+00 14000 1 + 10 -3.878303e+00 -4.434982e+00 1.926260e-01 1400 1 + 20 
-4.262885e+00 -4.399265e+00 3.126850e-01 2800 1 + 30 -3.075162e+00 -4.382527e+00 4.949551e-01 4200 1 + 40 -3.761147e+00 -4.369587e+00 6.276181e-01 5600 1 + 50 -4.323162e+00 -4.362199e+00 7.675850e-01 7000 1 + 60 -3.654943e+00 -4.358401e+00 9.108100e-01 8400 1 + 70 -4.010883e+00 -4.357368e+00 1.055288e+00 9800 1 + 80 -4.314412e+00 -4.355714e+00 1.204105e+00 11200 1 + 90 -4.542422e+00 -4.353708e+00 1.358413e+00 12600 1 + 100 -4.178952e+00 -4.351685e+00 1.507290e+00 14000 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 1.504494e+00 +total time (s) : 1.507290e+00 total solves : 14000 best bound : -4.351685e+00 simulation ci : -4.246786e+00 ± 8.703997e-02 @@ -344,16 +344,16 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 -1.573154e+00 -1.474247e+00 6.890011e-02 1050 1 - 20 -1.346690e+00 -1.471483e+00 1.073132e-01 1600 1 - 30 -1.308031e+00 -1.471307e+00 1.897640e-01 2650 1 - 40 -1.401200e+00 -1.471167e+00 2.311590e-01 3200 1 - 50 -1.557483e+00 -1.471097e+00 3.173201e-01 4250 1 - 60 -1.534169e+00 -1.471075e+00 3.621981e-01 4800 1 - 65 -1.689864e+00 -1.471075e+00 3.846991e-01 5075 1 + 10 -1.573154e+00 -1.474247e+00 7.284117e-02 1050 1 + 20 -1.346690e+00 -1.471483e+00 1.128891e-01 1600 1 + 30 -1.308031e+00 -1.471307e+00 2.000482e-01 2650 1 + 40 -1.401200e+00 -1.471167e+00 2.435000e-01 3200 1 + 50 -1.557483e+00 -1.471097e+00 3.391612e-01 4250 1 + 60 -1.534169e+00 -1.471075e+00 3.864150e-01 4800 1 + 65 -1.689864e+00 -1.471075e+00 4.101441e-01 5075 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 3.846991e-01 +total time (s) : 4.101441e-01 total solves : 5075 best bound : -1.471075e+00 simulation ci : -1.484094e+00 ± 4.058993e-02 @@ -387,14 +387,14 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 3.455904e+05 3.147347e+05 8.348942e-03 54 1 - 20 3.336455e+05 3.402383e+05 1.453710e-02 104 1 - 30 3.337559e+05 3.403155e+05 2.199411e-02 158 1 - 40 3.337559e+05 3.403155e+05 2.912116e-02 208 1 - 48 3.337559e+05 3.403155e+05 3.532600e-02 248 1 + 10 3.455904e+05 3.147347e+05 8.333206e-03 54 1 + 20 3.336455e+05 3.402383e+05 1.453018e-02 104 1 + 30 3.337559e+05 3.403155e+05 2.197099e-02 158 1 + 40 3.337559e+05 3.403155e+05 2.923417e-02 208 1 + 48 3.337559e+05 3.403155e+05 3.553319e-02 248 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 3.532600e-02 +total time (s) : 3.553319e-02 total solves : 248 best bound : 3.403155e+05 simulation ci : 1.351676e+08 ± 1.785770e+08 @@ -429,14 +429,14 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 4.403329e+05 3.509666e+05 1.333809e-02 92 1 - 20 4.055335e+05 4.054833e+05 2.393723e-02 172 1 - 30 3.959476e+05 4.067125e+05 3.701711e-02 264 1 - 40 3.959476e+05 4.067125e+05 5.015016e-02 344 1 - 47 3.959476e+05 4.067125e+05 6.019616e-02 400 1 + 10 4.403329e+05 3.509666e+05 1.391387e-02 92 1 + 20 4.055335e+05 4.054833e+05 2.501893e-02 172 1 + 30 3.959476e+05 4.067125e+05 8.837080e-02 264 
1 + 40 3.959476e+05 4.067125e+05 1.018989e-01 344 1 + 47 3.959476e+05 4.067125e+05 1.120598e-01 400 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 6.019616e-02 +total time (s) : 1.120598e-01 total solves : 400 best bound : 4.067125e+05 simulation ci : 2.695623e+07 ± 3.645336e+07 @@ -470,11 +470,11 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 8.316000e+03 0.000000e+00 9.881091e-02 14 1 - 40 4.716000e+03 4.074139e+03 2.245631e-01 776 1 + 1 8.316000e+03 0.000000e+00 9.385800e-02 14 1 + 40 4.716000e+03 4.074139e+03 2.245650e-01 776 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 2.245631e-01 +total time (s) : 2.245650e-01 total solves : 776 best bound : 4.074139e+03 simulation ci : 4.477341e+03 ± 6.593738e+02 @@ -507,11 +507,11 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1L 7.000000e+04 6.166667e+04 5.694509e-01 8 1 - 40L 5.500000e+04 6.250000e+04 8.037488e-01 344 1 + 1L 7.000000e+04 6.166667e+04 5.688460e-01 8 1 + 40L 5.500000e+04 6.250000e+04 8.131721e-01 344 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 8.037488e-01 +total time (s) : 8.131721e-01 total solves : 344 best bound : 6.250000e+04 simulation ci : 6.091250e+04 ± 6.325667e+03 @@ -544,11 +544,11 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 3.000000e+04 6.250000e+04 3.943920e-03 8 1 - 20 4.000000e+04 6.250000e+04 4.461193e-02 172 1 + 1 3.000000e+04 6.250000e+04 3.880978e-03 8 1 + 20 4.000000e+04 6.250000e+04 4.516983e-02 172 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 4.461193e-02 +total time (s) : 4.516983e-02 total solves : 172 best bound : 6.250000e+04 simulation ci : 5.650000e+04 ± 6.785916e+03 @@ -580,11 +580,11 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 7.000000e+04 6.250000e+04 5.640984e-03 5 1 - 10 6.000000e+04 6.250000e+04 2.029705e-02 50 1 + 1 7.000000e+04 6.250000e+04 5.445957e-03 5 1 + 10 6.000000e+04 6.250000e+04 2.079988e-02 50 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 2.029705e-02 +total time (s) : 2.079988e-02 total solves : 50 best bound : 6.250000e+04 simulation ci : 6.150000e+04 ± 1.265596e+04 @@ -617,11 +617,11 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1L 6.000000e+00 9.000000e+00 3.937697e-02 6 1 - 20L 9.000000e+00 9.000000e+00 8.010602e-02 123 1 + 1L 6.000000e+00 9.000000e+00 3.900313e-02 6 1 + 20L 9.000000e+00 9.000000e+00 7.972097e-02 123 1 ------------------------------------------------------------------- status : 
simulation_stopping -total time (s) : 8.010602e-02 +total time (s) : 7.972097e-02 total solves : 123 best bound : 9.000000e+00 simulation ci : 8.850000e+00 ± 2.940000e-01 @@ -653,17 +653,17 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 5 -1.620000e+00 -8.522173e-01 1.323819e-02 87 1 - 10 -1.847411e-13 1.392784e+00 1.999807e-02 142 1 - 15 -6.963319e-13 1.514085e+00 2.767110e-02 197 1 - 20 1.136868e-13 1.514085e+00 3.539515e-02 252 1 - 25 -1.080025e-12 1.514085e+00 9.109902e-02 339 1 - 30 1.136868e-13 1.514085e+00 9.960604e-02 394 1 - 35 -2.479988e+01 1.514085e+00 1.085091e-01 449 1 - 40 1.136868e-13 1.514085e+00 1.179650e-01 504 1 + 5 -1.620000e+00 -8.522173e-01 1.324701e-02 87 1 + 10 -1.847411e-13 1.392784e+00 2.023602e-02 142 1 + 15 -6.963319e-13 1.514085e+00 2.795506e-02 197 1 + 20 1.136868e-13 1.514085e+00 3.572106e-02 252 1 + 25 -1.080025e-12 1.514085e+00 9.403610e-02 339 1 + 30 1.136868e-13 1.514085e+00 1.026909e-01 394 1 + 35 -2.479988e+01 1.514085e+00 1.117070e-01 449 1 + 40 1.136868e-13 1.514085e+00 1.212301e-01 504 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.179650e-01 +total time (s) : 1.212301e-01 total solves : 504 best bound : 1.514085e+00 simulation ci : 3.429060e+00 ± 6.665883e+00 @@ -695,14 +695,14 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 4.375000e+00 6.798204e+00 1.507630e-01 278 1 - 20 1.230957e+01 1.358825e+00 1.964672e-01 428 1 - 30 8.859026e+00 1.278410e+00 2.287230e-01 706 1 - 40 -2.315795e+01 1.278410e+00 2.517691e-01 856 1 - 49 3.014193e+01 1.278410e+00 2.728870e-01 991 1 + 10 4.375000e+00 6.798204e+00 1.525280e-01 278 1 + 20 1.230957e+01 1.358825e+00 1.701591e-01 428 1 + 30 8.859026e+00 1.278410e+00 2.023129e-01 706 1 + 40 -2.315795e+01 1.278410e+00 2.255621e-01 856 1 + 49 3.014193e+01 1.278410e+00 2.473340e-01 991 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 2.728870e-01 +total time (s) : 2.473340e-01 total solves : 991 best bound : 1.278410e+00 simulation ci : -1.755629e+00 ± 5.526921e+00 @@ -734,13 +734,13 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 -2.454000e+00 1.647363e+00 3.616905e-02 278 1 - 20 -3.575118e+00 1.278410e+00 6.460500e-02 428 1 - 30 -5.003795e+01 1.278410e+00 1.107509e-01 706 1 - 40 6.835609e+00 1.278410e+00 1.499109e-01 856 1 + 10 -2.454000e+00 1.647363e+00 3.704500e-02 278 1 + 20 -3.575118e+00 1.278410e+00 6.618023e-02 428 1 + 30 -5.003795e+01 1.278410e+00 1.144490e-01 706 1 + 40 6.835609e+00 1.278410e+00 1.569302e-01 856 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.499109e-01 +total time (s) : 1.569302e-01 total solves : 856 best bound : 1.278410e+00 simulation ci : 4.369345e+00 ± 4.780393e+00 @@ -774,19 +774,19 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid 
------------------------------------------------------------------- - 10 4.787277e+00 9.346930e+00 1.398145e+00 900 1 - 20 6.374753e+00 1.361934e+01 1.562721e+00 1720 1 - 30 2.813321e+01 1.651297e+01 1.909971e+00 3036 1 - 40 1.654759e+01 1.632970e+01 2.247956e+00 4192 1 - 50 3.570941e+00 1.846889e+01 2.498539e+00 5020 1 - 60 1.087425e+01 1.890254e+01 2.778576e+00 5808 1 - 70 9.381610e+00 1.940320e+01 3.060148e+00 6540 1 - 80 5.648731e+01 1.962435e+01 3.272938e+00 7088 1 - 90 3.879273e+01 1.981008e+01 3.757243e+00 8180 1 - 100 7.870187e+00 1.997117e+01 3.973397e+00 8664 1 + 10 4.787277e+00 9.346930e+00 1.394588e+00 900 1 + 20 6.374753e+00 1.361934e+01 1.604891e+00 1720 1 + 30 2.813321e+01 1.651297e+01 1.935454e+00 3036 1 + 40 1.654759e+01 1.632970e+01 2.307615e+00 4192 1 + 50 3.570941e+00 1.846889e+01 2.575974e+00 5020 1 + 60 1.087425e+01 1.890254e+01 2.870635e+00 5808 1 + 70 9.381610e+00 1.940320e+01 3.166286e+00 6540 1 + 80 5.648731e+01 1.962435e+01 3.395494e+00 7088 1 + 90 3.879273e+01 1.981008e+01 3.906569e+00 8180 1 + 100 7.870187e+00 1.997117e+01 4.144914e+00 8664 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.973397e+00 +total time (s) : 4.144914e+00 total solves : 8664 best bound : 1.997117e+01 simulation ci : 2.275399e+01 ± 4.541987e+00 @@ -821,17 +821,17 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 5 9.000000e+00 9.002950e+00 1.338980e-01 235 1 - 10 4.000000e+00 9.002950e+00 1.541469e-01 310 1 - 15 4.000000e+00 9.002950e+00 1.752379e-01 385 1 - 20 4.000000e+00 9.002950e+00 1.963398e-01 460 1 - 25 1.000000e+01 9.002950e+00 2.681859e-01 695 1 - 30 5.000000e+00 9.002950e+00 2.903080e-01 770 1 - 35 1.000000e+01 9.002950e+00 3.136590e-01 845 1 - 40 5.000000e+00 9.002950e+00 3.380489e-01 920 1 + 5 9.000000e+00 9.002950e+00 1.374650e-01 235 1 + 10 4.000000e+00 9.002950e+00 1.580110e-01 310 1 + 15 4.000000e+00 9.002950e+00 1.792669e-01 385 1 + 20 4.000000e+00 9.002950e+00 2.009630e-01 460 1 + 25 1.000000e+01 9.002950e+00 2.763119e-01 695 1 + 30 5.000000e+00 9.002950e+00 2.991009e-01 770 1 + 35 1.000000e+01 9.002950e+00 3.233829e-01 845 1 + 40 5.000000e+00 9.002950e+00 3.481488e-01 920 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 3.380489e-01 +total time (s) : 3.481488e-01 total solves : 920 best bound : 9.002950e+00 simulation ci : 6.375000e+00 ± 7.930178e-01 @@ -866,15 +866,15 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 1.000000e+01 6.868919e+00 1.039782e-01 510 1 - 20 2.000000e+00 6.834387e+00 1.557140e-01 720 1 - 30 1.200000e+01 6.834387e+00 2.970202e-01 1230 1 - 40 7.000000e+00 6.823805e+00 3.494332e-01 1440 1 - 50 7.000000e+00 6.823805e+00 4.930391e-01 1950 1 - 60 5.000000e+00 6.823805e+00 5.463500e-01 2160 1 + 10 1.000000e+01 6.868919e+00 1.100612e-01 510 1 + 20 2.000000e+00 6.834387e+00 1.631992e-01 720 1 + 30 1.200000e+01 6.834387e+00 3.063211e-01 1230 1 + 40 7.000000e+00 6.823805e+00 3.595791e-01 1440 1 + 50 7.000000e+00 6.823805e+00 5.071142e-01 1950 1 + 60 5.000000e+00 6.823805e+00 5.619841e-01 2160 1 ------------------------------------------------------------------- status : simulation_stopping -total 
time (s) : 5.463500e-01 +total time (s) : 5.619841e-01 total solves : 2160 best bound : 6.823805e+00 simulation ci : 6.183333e+00 ± 6.258900e-01 @@ -908,15 +908,15 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 2.549668e+06 2.078257e+06 5.043571e-01 920 1 - 20 5.494568e+05 2.078257e+06 6.959481e-01 1340 1 - 30 4.985879e+04 2.078257e+06 1.225232e+00 2260 1 - 40 3.799447e+06 2.078257e+06 1.424117e+00 2680 1 - 50 1.049867e+06 2.078257e+06 1.979467e+00 3600 1 - 60 3.985191e+04 2.078257e+06 2.177041e+00 4020 1 + 10 2.549668e+06 2.078257e+06 5.306101e-01 920 1 + 20 5.494568e+05 2.078257e+06 7.298350e-01 1340 1 + 30 4.985879e+04 2.078257e+06 1.274781e+00 2260 1 + 40 3.799447e+06 2.078257e+06 1.478624e+00 2680 1 + 50 1.049867e+06 2.078257e+06 2.046983e+00 3600 1 + 60 3.985191e+04 2.078257e+06 2.251220e+00 4020 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 2.177041e+00 +total time (s) : 2.251220e+00 total solves : 4020 best bound : 2.078257e+06 simulation ci : 2.031697e+06 ± 3.922745e+05 @@ -950,15 +950,15 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10L 4.986663e+04 2.079119e+06 9.385839e-01 920 1 - 20L 3.799878e+06 2.079330e+06 1.656679e+00 1340 1 - 30L 3.003923e+04 2.079457e+06 2.762526e+00 2260 1 - 40L 5.549882e+06 2.079457e+06 3.560574e+00 2680 1 - 50L 2.799466e+06 2.079457e+06 4.713833e+00 3600 1 - 60L 3.549880e+06 2.079457e+06 5.473797e+00 4020 1 + 10L 4.986663e+04 2.079119e+06 9.832032e-01 920 1 + 20L 3.799878e+06 2.079330e+06 1.716709e+00 1340 1 + 30L 3.003923e+04 2.079457e+06 2.874528e+00 2260 1 + 40L 5.549882e+06 2.079457e+06 3.697897e+00 2680 1 + 50L 2.799466e+06 2.079457e+06 4.925736e+00 3600 1 + 60L 3.549880e+06 2.079457e+06 5.718980e+00 4020 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 5.473797e+00 +total time (s) : 5.718980e+00 total solves : 4020 best bound : 2.079457e+06 simulation ci : 2.352204e+06 ± 5.377531e+05 @@ -990,13 +990,13 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 100 2.500000e+01 1.188965e+02 7.742610e-01 1946 1 - 200 2.500000e+01 1.191634e+02 9.744711e-01 3920 1 - 300 0.000000e+00 1.191666e+02 1.181034e+00 5902 1 - 330 2.500000e+01 1.191667e+02 1.221981e+00 6224 1 + 100 2.500000e+01 1.188965e+02 7.883129e-01 1946 1 + 200 2.500000e+01 1.191634e+02 1.003221e+00 3920 1 + 300 0.000000e+00 1.191666e+02 1.222479e+00 5902 1 + 330 2.500000e+01 1.191667e+02 1.265766e+00 6224 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.221981e+00 +total time (s) : 1.265766e+00 total solves : 6224 best bound : 1.191667e+02 simulation ci : 2.158333e+01 ± 3.290252e+00 @@ -1028,12 +1028,12 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 100 0.000000e+00 1.191285e+02 3.271759e-01 2874 1 - 200 2.500000e+00 1.191666e+02 
5.546119e-01 4855 1 - 282 7.500000e+00 1.191667e+02 6.863480e-01 5733 1 + 100 0.000000e+00 1.191285e+02 2.961462e-01 2874 1 + 200 2.500000e+00 1.191666e+02 5.767140e-01 4855 1 + 282 7.500000e+00 1.191667e+02 7.111061e-01 5733 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 6.863480e-01 +total time (s) : 7.111061e-01 total solves : 5733 best bound : 1.191667e+02 simulation ci : 2.104610e+01 ± 3.492245e+00 @@ -1064,13 +1064,13 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 4.000000e+00 1.997089e+01 6.829882e-02 1204 1 - 20 8.000000e+00 2.000000e+01 8.908892e-02 1420 1 - 30 1.600000e+01 2.000000e+01 1.558468e-01 2628 1 - 40 8.000000e+00 2.000000e+01 1.774418e-01 2834 1 + 10 4.000000e+00 1.997089e+01 6.984305e-02 1204 1 + 20 8.000000e+00 2.000000e+01 9.086013e-02 1420 1 + 30 1.600000e+01 2.000000e+01 1.610591e-01 2628 1 + 40 8.000000e+00 2.000000e+01 1.829062e-01 2834 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.774418e-01 +total time (s) : 1.829062e-01 total solves : 2834 best bound : 2.000000e+01 simulation ci : 1.625000e+01 ± 4.766381e+00 @@ -1101,11 +1101,11 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.000000e+00 1.500000e+00 1.557112e-03 3 1 - 40 4.000000e+00 2.000000e+00 4.292202e-02 578 1 + 1 1.000000e+00 1.500000e+00 1.590967e-03 3 1 + 40 4.000000e+00 2.000000e+00 4.373312e-02 578 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 4.292202e-02 +total time (s) : 4.373312e-02 total solves : 578 best bound : 2.000000e+00 simulation ci : 1.950000e+00 ± 5.568095e-01 @@ -1138,138 +1138,137 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 5.250000e+00 4.888859e+00 1.663401e-01 1350 1 - 20 4.350000e+00 4.105855e+00 2.500541e-01 2700 1 - 30 5.000000e+00 4.100490e+00 3.432181e-01 4050 1 - 40 3.500000e+00 4.097376e+00 4.441550e-01 5400 1 - 50 5.250000e+00 4.095859e+00 5.509920e-01 6750 1 - 60 3.643750e+00 4.093342e+00 6.686151e-01 8100 1 - 70 2.643750e+00 4.091818e+00 7.786582e-01 9450 1 - 80 5.087500e+00 4.091591e+00 8.907940e-01 10800 1 - 90 5.062500e+00 4.091309e+00 1.003956e+00 12150 1 - 100 4.843750e+00 4.087004e+00 1.126466e+00 13500 1 - 110 3.437500e+00 4.086094e+00 1.248292e+00 14850 1 - 120 3.375000e+00 4.085926e+00 1.371468e+00 16200 1 - 130 5.025000e+00 4.085866e+00 1.496897e+00 17550 1 - 140 5.000000e+00 4.085734e+00 1.623436e+00 18900 1 - 150 3.500000e+00 4.085655e+00 1.751765e+00 20250 1 - 160 4.281250e+00 4.085454e+00 1.876908e+00 21600 1 - 170 4.562500e+00 4.085425e+00 2.003059e+00 22950 1 - 180 5.768750e+00 4.085425e+00 2.164016e+00 24300 1 - 190 3.468750e+00 4.085359e+00 2.297447e+00 25650 1 - 200 4.131250e+00 4.085225e+00 2.429615e+00 27000 1 - 210 4.512500e+00 4.085157e+00 2.560197e+00 28350 1 - 220 4.900000e+00 4.085153e+00 2.693851e+00 29700 1 - 230 4.025000e+00 4.085134e+00 2.832126e+00 31050 1 - 240 4.468750e+00 4.085116e+00 2.971372e+00 32400 1 - 250 4.062500e+00 
4.085075e+00 3.107905e+00 33750 1 - 260 4.875000e+00 4.085037e+00 3.249029e+00 35100 1 - 270 3.850000e+00 4.085011e+00 3.388852e+00 36450 1 - 280 4.912500e+00 4.084992e+00 3.530865e+00 37800 1 - 290 2.987500e+00 4.084986e+00 3.687086e+00 39150 1 - 300 3.825000e+00 4.084957e+00 3.834561e+00 40500 1 - 310 3.250000e+00 4.084911e+00 3.980833e+00 41850 1 - 320 3.600000e+00 4.084896e+00 4.126528e+00 43200 1 - 330 3.925000e+00 4.084896e+00 4.261477e+00 44550 1 - 340 4.500000e+00 4.084893e+00 4.405026e+00 45900 1 - 350 5.000000e+00 4.084891e+00 4.548161e+00 47250 1 - 360 3.075000e+00 4.084866e+00 4.690575e+00 48600 1 - 370 3.500000e+00 4.084861e+00 4.842601e+00 49950 1 - 380 3.356250e+00 4.084857e+00 4.991565e+00 51300 1 - 390 5.500000e+00 4.084846e+00 5.183973e+00 52650 1 - 400 4.475000e+00 4.084846e+00 5.330938e+00 54000 1 - 410 3.750000e+00 4.084843e+00 5.479866e+00 55350 1 - 420 3.687500e+00 4.084843e+00 5.632725e+00 56700 1 - 430 4.337500e+00 4.084825e+00 5.797897e+00 58050 1 - 440 5.750000e+00 4.084825e+00 5.937596e+00 59400 1 - 450 4.925000e+00 4.084792e+00 6.095874e+00 60750 1 - 460 3.600000e+00 4.084792e+00 6.253533e+00 62100 1 - 470 4.387500e+00 4.084792e+00 6.403327e+00 63450 1 - 480 4.000000e+00 4.084792e+00 6.562336e+00 64800 1 - 490 2.975000e+00 4.084788e+00 6.715832e+00 66150 1 - 500 3.125000e+00 4.084788e+00 6.883962e+00 67500 1 - 510 4.250000e+00 4.084788e+00 7.044633e+00 68850 1 - 520 4.512500e+00 4.084786e+00 7.196681e+00 70200 1 - 530 3.875000e+00 4.084786e+00 7.356925e+00 71550 1 - 540 4.387500e+00 4.084781e+00 7.516770e+00 72900 1 - 550 5.281250e+00 4.084780e+00 7.680792e+00 74250 1 - 560 4.650000e+00 4.084780e+00 7.839925e+00 75600 1 - 570 3.062500e+00 4.084780e+00 7.995148e+00 76950 1 - 580 3.187500e+00 4.084780e+00 8.171869e+00 78300 1 - 590 3.812500e+00 4.084780e+00 8.322004e+00 79650 1 - 600 3.637500e+00 4.084774e+00 8.480089e+00 81000 1 - 610 3.950000e+00 4.084765e+00 8.636525e+00 82350 1 - 620 4.625000e+00 4.084760e+00 8.796984e+00 83700 1 - 630 4.218750e+00 4.084760e+00 8.960174e+00 85050 1 - 640 3.025000e+00 4.084755e+00 9.120951e+00 86400 1 - 650 2.993750e+00 4.084751e+00 9.272855e+00 87750 1 - 660 3.262500e+00 4.084746e+00 9.430292e+00 89100 1 - 670 3.625000e+00 4.084746e+00 9.590352e+00 90450 1 - 680 2.981250e+00 4.084746e+00 9.752024e+00 91800 1 - 690 4.187500e+00 4.084746e+00 9.920906e+00 93150 1 - 700 4.500000e+00 4.084746e+00 1.007550e+01 94500 1 - 710 3.225000e+00 4.084746e+00 1.023228e+01 95850 1 - 720 4.375000e+00 4.084746e+00 1.039064e+01 97200 1 - 730 2.650000e+00 4.084746e+00 1.055457e+01 98550 1 - 740 3.250000e+00 4.084746e+00 1.071168e+01 99900 1 - 750 4.725000e+00 4.084746e+00 1.091026e+01 101250 1 - 760 3.375000e+00 4.084746e+00 1.108029e+01 102600 1 - 770 5.375000e+00 4.084746e+00 1.124661e+01 103950 1 - 780 4.068750e+00 4.084746e+00 1.141610e+01 105300 1 - 790 4.412500e+00 4.084746e+00 1.158874e+01 106650 1 - 800 4.350000e+00 4.084746e+00 1.175775e+01 108000 1 - 810 5.887500e+00 4.084746e+00 1.193063e+01 109350 1 - 820 4.912500e+00 4.084746e+00 1.209860e+01 110700 1 - 830 4.387500e+00 4.084746e+00 1.226029e+01 112050 1 - 840 3.675000e+00 4.084746e+00 1.245525e+01 113400 1 - 850 5.375000e+00 4.084746e+00 1.262465e+01 114750 1 - 860 3.562500e+00 4.084746e+00 1.280133e+01 116100 1 - 870 3.075000e+00 4.084746e+00 1.298335e+01 117450 1 - 880 3.625000e+00 4.084746e+00 1.315501e+01 118800 1 - 890 2.937500e+00 4.084746e+00 1.332374e+01 120150 1 - 900 4.450000e+00 4.084746e+00 1.352272e+01 121500 1 - 910 4.200000e+00 4.084746e+00 1.369764e+01 122850 1 - 920 
3.687500e+00 4.084746e+00 1.387853e+01 124200 1 - 930 4.725000e+00 4.084746e+00 1.406010e+01 125550 1 - 940 4.018750e+00 4.084746e+00 1.423773e+01 126900 1 - 950 4.675000e+00 4.084746e+00 1.440701e+01 128250 1 - 960 3.375000e+00 4.084746e+00 1.457836e+01 129600 1 - 970 3.812500e+00 4.084746e+00 1.474899e+01 130950 1 - 980 3.112500e+00 4.084746e+00 1.492253e+01 132300 1 - 990 3.600000e+00 4.084746e+00 1.509909e+01 133650 1 - 1000 5.500000e+00 4.084746e+00 1.527622e+01 135000 1 - 1010 3.187500e+00 4.084746e+00 1.544734e+01 136350 1 - 1020 4.900000e+00 4.084746e+00 1.562007e+01 137700 1 - 1030 3.637500e+00 4.084746e+00 1.582686e+01 139050 1 - 1040 3.975000e+00 4.084746e+00 1.600489e+01 140400 1 - 1050 4.750000e+00 4.084746e+00 1.618961e+01 141750 1 - 1060 4.437500e+00 4.084746e+00 1.638499e+01 143100 1 - 1070 5.000000e+00 4.084746e+00 1.656761e+01 144450 1 - 1080 4.143750e+00 4.084746e+00 1.675360e+01 145800 1 - 1090 5.625000e+00 4.084746e+00 1.693228e+01 147150 1 - 1100 3.475000e+00 4.084746e+00 1.711901e+01 148500 1 - 1110 4.156250e+00 4.084746e+00 1.730887e+01 149850 1 - 1120 4.450000e+00 4.084746e+00 1.749134e+01 151200 1 - 1130 3.312500e+00 4.084741e+00 1.767779e+01 152550 1 - 1140 5.375000e+00 4.084741e+00 1.785472e+01 153900 1 - 1150 4.800000e+00 4.084737e+00 1.806527e+01 155250 1 - 1160 3.300000e+00 4.084737e+00 1.825366e+01 156600 1 - 1170 4.356250e+00 4.084737e+00 1.843901e+01 157950 1 - 1180 3.900000e+00 4.084737e+00 1.862842e+01 159300 1 - 1190 4.450000e+00 4.084737e+00 1.882290e+01 160650 1 - 1200 5.156250e+00 4.084737e+00 1.901250e+01 162000 1 - 1210 4.500000e+00 4.084737e+00 1.919030e+01 163350 1 - 1220 4.875000e+00 4.084737e+00 1.938506e+01 164700 1 - 1230 4.000000e+00 4.084737e+00 1.956429e+01 166050 1 - 1240 4.062500e+00 4.084737e+00 1.975550e+01 167400 1 - 1250 5.450000e+00 4.084737e+00 1.995034e+01 168750 1 - 1252 4.650000e+00 4.084737e+00 2.000588e+01 169020 1 + 10 5.250000e+00 4.888859e+00 1.704819e-01 1350 1 + 20 4.350000e+00 4.105855e+00 2.557840e-01 2700 1 + 30 5.000000e+00 4.100490e+00 3.514409e-01 4050 1 + 40 3.500000e+00 4.097376e+00 4.545798e-01 5400 1 + 50 5.250000e+00 4.095859e+00 5.626230e-01 6750 1 + 60 3.643750e+00 4.093342e+00 6.754730e-01 8100 1 + 70 2.643750e+00 4.091818e+00 7.879639e-01 9450 1 + 80 5.087500e+00 4.091591e+00 9.042399e-01 10800 1 + 90 5.062500e+00 4.091309e+00 1.019908e+00 12150 1 + 100 4.843750e+00 4.087004e+00 1.144455e+00 13500 1 + 110 3.437500e+00 4.086094e+00 1.268943e+00 14850 1 + 120 3.375000e+00 4.085926e+00 1.394307e+00 16200 1 + 130 5.025000e+00 4.085866e+00 1.521941e+00 17550 1 + 140 5.000000e+00 4.085734e+00 1.649412e+00 18900 1 + 150 3.500000e+00 4.085655e+00 1.778080e+00 20250 1 + 160 4.281250e+00 4.085454e+00 1.904933e+00 21600 1 + 170 4.562500e+00 4.085425e+00 2.033533e+00 22950 1 + 180 5.768750e+00 4.085425e+00 2.163414e+00 24300 1 + 190 3.468750e+00 4.085359e+00 2.299521e+00 25650 1 + 200 4.131250e+00 4.085225e+00 2.433752e+00 27000 1 + 210 4.512500e+00 4.085157e+00 2.604127e+00 28350 1 + 220 4.900000e+00 4.085153e+00 2.737455e+00 29700 1 + 230 4.025000e+00 4.085134e+00 2.875680e+00 31050 1 + 240 4.468750e+00 4.085116e+00 3.015667e+00 32400 1 + 250 4.062500e+00 4.085075e+00 3.153744e+00 33750 1 + 260 4.875000e+00 4.085037e+00 3.294495e+00 35100 1 + 270 3.850000e+00 4.085011e+00 3.434320e+00 36450 1 + 280 4.912500e+00 4.084992e+00 3.576204e+00 37800 1 + 290 2.987500e+00 4.084986e+00 3.725002e+00 39150 1 + 300 3.825000e+00 4.084957e+00 3.877516e+00 40500 1 + 310 3.250000e+00 4.084911e+00 4.027672e+00 41850 1 + 320 
3.600000e+00 4.084896e+00 4.174708e+00 43200 1 + 330 3.925000e+00 4.084896e+00 4.311967e+00 44550 1 + 340 4.500000e+00 4.084893e+00 4.458920e+00 45900 1 + 350 5.000000e+00 4.084891e+00 4.605219e+00 47250 1 + 360 3.075000e+00 4.084866e+00 4.750036e+00 48600 1 + 370 3.500000e+00 4.084861e+00 4.902742e+00 49950 1 + 380 3.356250e+00 4.084857e+00 5.058502e+00 51300 1 + 390 5.500000e+00 4.084846e+00 5.217160e+00 52650 1 + 400 4.475000e+00 4.084846e+00 5.367141e+00 54000 1 + 410 3.750000e+00 4.084843e+00 5.518252e+00 55350 1 + 420 3.687500e+00 4.084843e+00 5.674711e+00 56700 1 + 430 4.337500e+00 4.084825e+00 5.869491e+00 58050 1 + 440 5.750000e+00 4.084825e+00 6.013694e+00 59400 1 + 450 4.925000e+00 4.084792e+00 6.175355e+00 60750 1 + 460 3.600000e+00 4.084792e+00 6.332937e+00 62100 1 + 470 4.387500e+00 4.084792e+00 6.485656e+00 63450 1 + 480 4.000000e+00 4.084792e+00 6.648800e+00 64800 1 + 490 2.975000e+00 4.084788e+00 6.804068e+00 66150 1 + 500 3.125000e+00 4.084788e+00 6.960602e+00 67500 1 + 510 4.250000e+00 4.084788e+00 7.128119e+00 68850 1 + 520 4.512500e+00 4.084786e+00 7.283815e+00 70200 1 + 530 3.875000e+00 4.084786e+00 7.448404e+00 71550 1 + 540 4.387500e+00 4.084781e+00 7.613383e+00 72900 1 + 550 5.281250e+00 4.084780e+00 7.778834e+00 74250 1 + 560 4.650000e+00 4.084780e+00 7.934796e+00 75600 1 + 570 3.062500e+00 4.084780e+00 8.092858e+00 76950 1 + 580 3.187500e+00 4.084780e+00 8.245184e+00 78300 1 + 590 3.812500e+00 4.084780e+00 8.395426e+00 79650 1 + 600 3.637500e+00 4.084774e+00 8.555107e+00 81000 1 + 610 3.950000e+00 4.084765e+00 8.712438e+00 82350 1 + 620 4.625000e+00 4.084760e+00 8.871296e+00 83700 1 + 630 4.218750e+00 4.084760e+00 9.063928e+00 85050 1 + 640 3.025000e+00 4.084755e+00 9.227391e+00 86400 1 + 650 2.993750e+00 4.084751e+00 9.381393e+00 87750 1 + 660 3.262500e+00 4.084746e+00 9.541476e+00 89100 1 + 670 3.625000e+00 4.084746e+00 9.705555e+00 90450 1 + 680 2.981250e+00 4.084746e+00 9.870671e+00 91800 1 + 690 4.187500e+00 4.084746e+00 1.003358e+01 93150 1 + 700 4.500000e+00 4.084746e+00 1.019379e+01 94500 1 + 710 3.225000e+00 4.084746e+00 1.035506e+01 95850 1 + 720 4.375000e+00 4.084746e+00 1.051891e+01 97200 1 + 730 2.650000e+00 4.084746e+00 1.068753e+01 98550 1 + 740 3.250000e+00 4.084746e+00 1.085118e+01 99900 1 + 750 4.725000e+00 4.084746e+00 1.102475e+01 101250 1 + 760 3.375000e+00 4.084746e+00 1.119860e+01 102600 1 + 770 5.375000e+00 4.084746e+00 1.136600e+01 103950 1 + 780 4.068750e+00 4.084746e+00 1.153917e+01 105300 1 + 790 4.412500e+00 4.084746e+00 1.171766e+01 106650 1 + 800 4.350000e+00 4.084746e+00 1.189214e+01 108000 1 + 810 5.887500e+00 4.084746e+00 1.206906e+01 109350 1 + 820 4.912500e+00 4.084746e+00 1.226712e+01 110700 1 + 830 4.387500e+00 4.084746e+00 1.243011e+01 112050 1 + 840 3.675000e+00 4.084746e+00 1.260213e+01 113400 1 + 850 5.375000e+00 4.084746e+00 1.276808e+01 114750 1 + 860 3.562500e+00 4.084746e+00 1.294599e+01 116100 1 + 870 3.075000e+00 4.084746e+00 1.312371e+01 117450 1 + 880 3.625000e+00 4.084746e+00 1.329775e+01 118800 1 + 890 2.937500e+00 4.084746e+00 1.346379e+01 120150 1 + 900 4.450000e+00 4.084746e+00 1.363870e+01 121500 1 + 910 4.200000e+00 4.084746e+00 1.381329e+01 122850 1 + 920 3.687500e+00 4.084746e+00 1.399519e+01 124200 1 + 930 4.725000e+00 4.084746e+00 1.417308e+01 125550 1 + 940 4.018750e+00 4.084746e+00 1.435487e+01 126900 1 + 950 4.675000e+00 4.084746e+00 1.452249e+01 128250 1 + 960 3.375000e+00 4.084746e+00 1.468833e+01 129600 1 + 970 3.812500e+00 4.084746e+00 1.485362e+01 130950 1 + 980 3.112500e+00 4.084746e+00 
1.504885e+01 132300 1 + 990 3.600000e+00 4.084746e+00 1.522341e+01 133650 1 + 1000 5.500000e+00 4.084746e+00 1.540312e+01 135000 1 + 1010 3.187500e+00 4.084746e+00 1.557377e+01 136350 1 + 1020 4.900000e+00 4.084746e+00 1.574687e+01 137700 1 + 1030 3.637500e+00 4.084746e+00 1.593309e+01 139050 1 + 1040 3.975000e+00 4.084746e+00 1.611098e+01 140400 1 + 1050 4.750000e+00 4.084746e+00 1.629219e+01 141750 1 + 1060 4.437500e+00 4.084746e+00 1.648865e+01 143100 1 + 1070 5.000000e+00 4.084746e+00 1.667046e+01 144450 1 + 1080 4.143750e+00 4.084746e+00 1.685576e+01 145800 1 + 1090 5.625000e+00 4.084746e+00 1.703189e+01 147150 1 + 1100 3.475000e+00 4.084746e+00 1.721501e+01 148500 1 + 1110 4.156250e+00 4.084746e+00 1.742786e+01 149850 1 + 1120 4.450000e+00 4.084746e+00 1.761171e+01 151200 1 + 1130 3.312500e+00 4.084741e+00 1.779781e+01 152550 1 + 1140 5.375000e+00 4.084741e+00 1.797506e+01 153900 1 + 1150 4.800000e+00 4.084737e+00 1.816666e+01 155250 1 + 1160 3.300000e+00 4.084737e+00 1.834990e+01 156600 1 + 1170 4.356250e+00 4.084737e+00 1.853366e+01 157950 1 + 1180 3.900000e+00 4.084737e+00 1.871973e+01 159300 1 + 1190 4.450000e+00 4.084737e+00 1.890654e+01 160650 1 + 1200 5.156250e+00 4.084737e+00 1.910280e+01 162000 1 + 1210 4.500000e+00 4.084737e+00 1.928728e+01 163350 1 + 1220 4.875000e+00 4.084737e+00 1.949882e+01 164700 1 + 1230 4.000000e+00 4.084737e+00 1.970362e+01 166050 1 + 1240 4.062500e+00 4.084737e+00 1.989043e+01 167400 1 + 1246 3.000000e+00 4.084737e+00 2.000524e+01 168210 1 ------------------------------------------------------------------- status : time_limit -total time (s) : 2.000588e+01 -total solves : 169020 +total time (s) : 2.000524e+01 +total solves : 168210 best bound : 4.084737e+00 -simulation ci : 4.071058e+00 ± 4.034930e-02 +simulation ci : 4.071445e+00 ± 4.036229e-02 numeric issues : 0 ------------------------------------------------------------------- @@ -1299,29 +1298,29 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 5.025000e+00 4.357902e+00 1.914341e-01 1350 1 - 20 4.250000e+00 4.340926e+00 5.634692e-01 2700 1 - 30 4.312500e+00 4.043498e+00 1.075576e+00 4050 1 - 40 4.525000e+00 4.041138e+00 1.722561e+00 5400 1 - 50 3.687500e+00 4.040451e+00 2.466196e+00 6750 1 - 60 2.987500e+00 4.040209e+00 3.322262e+00 8100 1 - 70 3.225000e+00 4.039112e+00 4.313395e+00 9450 1 - 80 4.500000e+00 4.039113e+00 5.376891e+00 10800 1 - 90 5.750000e+00 4.039007e+00 6.567904e+00 12150 1 - 100 3.700000e+00 4.038888e+00 7.886618e+00 13500 1 - 110 3.800000e+00 4.038857e+00 9.274528e+00 14850 1 - 120 2.687500e+00 4.038826e+00 1.073563e+01 16200 1 - 130 4.737500e+00 4.038815e+00 1.243337e+01 17550 1 - 140 4.550000e+00 4.038782e+00 1.423981e+01 18900 1 - 150 3.250000e+00 4.038775e+00 1.602451e+01 20250 1 - 160 3.062500e+00 4.038770e+00 1.800927e+01 21600 1 - 170 3.750000e+00 4.037586e+00 2.003571e+01 22950 1 + 10 4.512500e+00 4.066874e+00 1.980422e-01 1350 1 + 20 5.062500e+00 4.040569e+00 5.403211e-01 2700 1 + 30 4.968750e+00 4.039400e+00 1.060473e+00 4050 1 + 40 4.125000e+00 4.039286e+00 1.720881e+00 5400 1 + 50 3.925000e+00 4.039078e+00 2.568094e+00 6750 1 + 60 3.875000e+00 4.039004e+00 3.473380e+00 8100 1 + 70 3.918750e+00 4.039008e+00 4.585967e+00 9450 1 + 80 3.600000e+00 4.038911e+00 5.747896e+00 10800 1 + 90 4.250000e+00 4.038874e+00 7.041694e+00 12150 1 + 100 5.400000e+00 4.038820e+00 8.425379e+00 13500 1 + 110 
3.000000e+00 4.038795e+00 9.923730e+00 14850 1 + 120 3.000000e+00 4.038812e+00 1.150825e+01 16200 1 + 130 2.993750e+00 4.038782e+00 1.320263e+01 17550 1 + 140 4.406250e+00 4.038770e+00 1.508397e+01 18900 1 + 150 5.625000e+00 4.038777e+00 1.698754e+01 20250 1 + 160 3.081250e+00 4.038772e+00 1.895570e+01 21600 1 + 165 5.006250e+00 4.038772e+00 2.003449e+01 22275 1 ------------------------------------------------------------------- status : time_limit -total time (s) : 2.003571e+01 -total solves : 22950 -best bound : 4.037586e+00 -simulation ci : 4.072096e+00 ± 1.147962e-01 +total time (s) : 2.003449e+01 +total solves : 22275 +best bound : 4.038772e+00 +simulation ci : 4.070947e+00 ± 1.188614e-01 numeric issues : 0 ------------------------------------------------------------------- @@ -1352,19 +1351,18 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 3.075152e+00 1.161916e+00 4.014568e-01 1680 1 - 20 2.990147e+00 1.167070e+00 4.960868e-01 2560 1 - 30 2.537098e+00 1.167299e+00 8.817201e-01 4240 1 - 40 3.173765e+00 1.167299e+00 9.791501e-01 5120 1 - 50 3.509464e+00 1.167299e+00 1.372273e+00 6800 1 - 60 4.637198e+00 1.167410e+00 1.510682e+00 7680 1 - 63 3.068220e+00 1.167410e+00 1.542017e+00 7944 1 + 10 3.426289e+00 1.163128e+00 3.929579e-01 1680 1 + 20 2.386729e+00 1.163467e+00 4.889431e-01 2560 1 + 30 3.405925e+00 1.165481e+00 8.810191e-01 4240 1 + 40 3.219206e+00 1.165481e+00 9.849341e-01 5120 1 + 50 3.074686e+00 1.165481e+00 1.385555e+00 6800 1 + 60 3.224080e+00 1.165481e+00 1.488954e+00 7680 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.542017e+00 -total solves : 7944 -best bound : 1.167410e+00 -simulation ci : 3.215855e+00 ± 1.095737e-01 +total time (s) : 1.488954e+00 +total solves : 7680 +best bound : 1.165481e+00 +simulation ci : 3.299213e+00 ± 1.277496e-01 numeric issues : 0 ------------------------------------------------------------------- @@ -1396,16 +1394,16 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 -9.800000e+01 -5.809615e+01 3.419590e-02 78 1 - 20 -4.000000e+01 -5.809615e+01 7.143688e-02 148 1 - 30 -4.000000e+01 -5.809615e+01 1.092849e-01 226 1 - 40 -4.000000e+01 -5.809615e+01 1.427970e-01 296 1 + 10 -4.000000e+01 -5.809615e+01 3.133106e-02 78 1 + 20 -4.000000e+01 -5.809615e+01 6.373596e-02 148 1 + 30 -4.700000e+01 -5.809615e+01 1.023810e-01 226 1 + 40 -4.000000e+01 -5.809615e+01 1.361670e-01 296 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.427970e-01 +total time (s) : 1.361670e-01 total solves : 296 best bound : -5.809615e+01 -simulation ci : -5.508750e+01 ± 7.745664e+00 +simulation ci : -5.188750e+01 ± 7.419070e+00 numeric issues : 0 ------------------------------------------------------------------- @@ -1437,16 +1435,16 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 -4.000000e+01 -6.196125e+01 3.858209e-02 138 1 - 20 -6.300000e+01 -6.196125e+01 7.412910e-02 258 1 - 30 -4.000000e+01 -6.196125e+01 1.231642e-01 396 1 - 40 
-9.800000e+01 -6.196125e+01 1.596701e-01 516 1 + 10 -4.700000e+01 -6.196125e+01 4.044700e-02 138 1 + 20 -9.800000e+01 -6.196125e+01 7.669592e-02 258 1 + 30 -7.500000e+01 -6.196125e+01 1.264119e-01 396 1 + 40 -6.300000e+01 -6.196125e+01 1.642599e-01 516 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.596701e-01 +total time (s) : 1.642599e-01 total solves : 516 best bound : -6.196125e+01 -simulation ci : -5.211250e+01 ± 5.462441e+00 +simulation ci : -5.548750e+01 ± 5.312051e+00 numeric issues : 0 ------------------------------------------------------------------- @@ -1478,16 +1476,16 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 10 -4.700000e+01 -6.546793e+01 7.615209e-02 462 1 - 20 -5.600000e+01 -6.546793e+01 1.345971e-01 852 1 - 30 -8.200000e+01 -6.546793e+01 2.454309e-01 1314 1 - 40 -8.200000e+01 -6.546793e+01 3.039951e-01 1704 1 + 10 -8.200000e+01 -6.546793e+01 7.644391e-02 462 1 + 20 -7.000000e+01 -6.546793e+01 1.428950e-01 852 1 + 30 -6.300000e+01 -6.546793e+01 2.591200e-01 1314 1 + 40 -4.700000e+01 -6.546793e+01 3.199151e-01 1704 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 3.039951e-01 +total time (s) : 3.199151e-01 total solves : 1704 best bound : -6.546793e+01 -simulation ci : -6.211250e+01 ± 5.560515e+00 +simulation ci : -6.263750e+01 ± 5.346304e+00 numeric issues : 0 ------------------------------------------------------------------- @@ -1518,14 +1516,14 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1L 6.000000e+00 1.200000e+01 4.097199e-02 11 1 - 40L 6.000000e+00 8.000000e+00 4.075310e-01 602 1 + 1L 3.000000e+00 1.422222e+01 4.147816e-02 11 1 + 40L 6.000000e+00 8.000000e+00 5.456250e-01 602 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 4.075310e-01 +total time (s) : 5.456250e-01 total solves : 602 best bound : 8.000000e+00 -simulation ci : 7.650000e+00 ± 8.140491e-01 +simulation ci : 7.125000e+00 ± 7.499254e-01 numeric issues : 0 ------------------------------------------------------------------- @@ -1556,14 +1554,14 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 -9.800000e+04 4.922260e+05 8.704495e-02 6 1 - 40 1.093500e+05 1.083900e+05 1.160882e-01 240 1 + 1 -9.800000e+04 4.922260e+05 8.721399e-02 6 1 + 40 4.882000e+04 1.083900e+05 1.163750e-01 240 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 1.160882e-01 +total time (s) : 1.163750e-01 total solves : 240 best bound : 1.083900e+05 -simulation ci : 9.763370e+04 ± 1.992771e+04 +simulation ci : 1.002754e+05 ± 2.174010e+04 numeric issues : 0 ------------------------------------------------------------------- diff --git a/previews/PR797/examples/SDDP_0.0.log b/previews/PR797/examples/SDDP_0.0.log index 467f24e67..d65a3ea9e 100644 --- a/previews/PR797/examples/SDDP_0.0.log +++ b/previews/PR797/examples/SDDP_0.0.log @@ -19,11 +19,11 @@ subproblem structure 
------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 0.000000e+00 0.000000e+00 9.258032e-03 36 1 - 10 0.000000e+00 0.000000e+00 2.888799e-02 360 1 + 1 0.000000e+00 0.000000e+00 9.275913e-03 36 1 + 10 0.000000e+00 0.000000e+00 6.656289e-02 360 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 2.888799e-02 +total time (s) : 6.656289e-02 total solves : 360 best bound : 0.000000e+00 simulation ci : 0.000000e+00 ± 0.000000e+00 diff --git a/previews/PR797/examples/SDDP_0.0625.log b/previews/PR797/examples/SDDP_0.0625.log index 150a2b2bc..d6da3691f 100644 --- a/previews/PR797/examples/SDDP_0.0625.log +++ b/previews/PR797/examples/SDDP_0.0625.log @@ -20,11 +20,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 3.437500e+01 5.937500e+01 3.138065e-03 3375 1 - 10 3.750000e+01 5.938557e+01 3.082395e-02 3699 1 + 1 3.437500e+01 5.937500e+01 3.843069e-03 3375 1 + 10 3.750000e+01 5.938557e+01 3.292513e-02 3699 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.082395e-02 +total time (s) : 3.292513e-02 total solves : 3699 best bound : 5.938557e+01 simulation ci : 5.906250e+01 ± 1.352595e+01 diff --git a/previews/PR797/examples/SDDP_0.125.log b/previews/PR797/examples/SDDP_0.125.log index 0212ba0d2..5e8d0213c 100644 --- a/previews/PR797/examples/SDDP_0.125.log +++ b/previews/PR797/examples/SDDP_0.125.log @@ -20,11 +20,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.675000e+02 1.129545e+02 3.178835e-03 1891 1 - 10 1.362500e+02 1.129771e+02 3.011298e-02 2215 1 + 1 1.675000e+02 1.129545e+02 2.898932e-03 1891 1 + 10 1.362500e+02 1.129771e+02 3.076887e-02 2215 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.011298e-02 +total time (s) : 3.076887e-02 total solves : 2215 best bound : 1.129771e+02 simulation ci : 1.176375e+02 ± 1.334615e+01 diff --git a/previews/PR797/examples/SDDP_0.25.log b/previews/PR797/examples/SDDP_0.25.log index 6c3245ff0..932320a95 100644 --- a/previews/PR797/examples/SDDP_0.25.log +++ b/previews/PR797/examples/SDDP_0.25.log @@ -20,11 +20,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.887500e+02 1.995243e+02 2.701044e-03 1149 1 - 10 2.962500e+02 2.052855e+02 2.930593e-02 1473 1 + 1 1.887500e+02 1.995243e+02 2.982855e-03 1149 1 + 10 2.962500e+02 2.052855e+02 3.110385e-02 1473 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 2.930593e-02 +total time (s) : 3.110385e-02 total solves : 1473 best bound : 2.052855e+02 simulation ci : 2.040201e+02 ± 3.876873e+01 diff --git a/previews/PR797/examples/SDDP_0.375.log b/previews/PR797/examples/SDDP_0.375.log index 7c5c372dd..cd52728fa 100644 --- a/previews/PR797/examples/SDDP_0.375.log +++ b/previews/PR797/examples/SDDP_0.375.log @@ -20,11 +20,11 @@ subproblem structure 
------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 2.562500e+02 2.788373e+02 3.196955e-03 2262 1 - 10 2.375000e+02 2.795671e+02 3.269506e-02 2586 1 + 1 2.562500e+02 2.788373e+02 3.362894e-03 2262 1 + 10 2.375000e+02 2.795671e+02 3.389001e-02 2586 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.269506e-02 +total time (s) : 3.389001e-02 total solves : 2586 best bound : 2.795671e+02 simulation ci : 2.375000e+02 ± 3.099032e+01 diff --git a/previews/PR797/examples/SDDP_0.5.log b/previews/PR797/examples/SDDP_0.5.log index 910a54f44..1d0a0c885 100644 --- a/previews/PR797/examples/SDDP_0.5.log +++ b/previews/PR797/examples/SDDP_0.5.log @@ -20,11 +20,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 4.850000e+02 3.349793e+02 3.360033e-03 778 1 - 10 3.550000e+02 3.468286e+02 3.125787e-02 1102 1 + 1 4.850000e+02 3.349793e+02 3.111839e-03 778 1 + 10 3.550000e+02 3.468286e+02 3.192997e-02 1102 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.125787e-02 +total time (s) : 3.192997e-02 total solves : 1102 best bound : 3.468286e+02 simulation ci : 3.948309e+02 ± 7.954180e+01 diff --git a/previews/PR797/examples/SDDP_0.625.log b/previews/PR797/examples/SDDP_0.625.log index d846be280..692a71cba 100644 --- a/previews/PR797/examples/SDDP_0.625.log +++ b/previews/PR797/examples/SDDP_0.625.log @@ -20,11 +20,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 3.812500e+02 4.072952e+02 3.537178e-03 2633 1 - 10 5.818750e+02 4.080500e+02 3.491998e-02 2957 1 + 1 3.812500e+02 4.072952e+02 3.739119e-03 2633 1 + 10 5.818750e+02 4.080500e+02 3.642511e-02 2957 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.491998e-02 +total time (s) : 3.642511e-02 total solves : 2957 best bound : 4.080500e+02 simulation ci : 4.235323e+02 ± 1.029245e+02 diff --git a/previews/PR797/examples/SDDP_0.75.log b/previews/PR797/examples/SDDP_0.75.log index 45a59cde6..6bdc72ad4 100644 --- a/previews/PR797/examples/SDDP_0.75.log +++ b/previews/PR797/examples/SDDP_0.75.log @@ -20,11 +20,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 3.737500e+02 4.626061e+02 3.299952e-03 1520 1 - 10 2.450000e+02 4.658509e+02 3.354406e-02 1844 1 + 1 3.737500e+02 4.626061e+02 3.888130e-03 1520 1 + 10 2.450000e+02 4.658509e+02 3.495312e-02 1844 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.354406e-02 +total time (s) : 3.495312e-02 total solves : 1844 best bound : 4.658509e+02 simulation ci : 3.907376e+02 ± 9.045105e+01 diff --git a/previews/PR797/examples/SDDP_0.875.log b/previews/PR797/examples/SDDP_0.875.log index 43ab57f07..d0b61c53b 100644 --- a/previews/PR797/examples/SDDP_0.875.log +++ b/previews/PR797/examples/SDDP_0.875.log @@ -20,11 +20,11 @@ subproblem structure 
------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 8.525000e+02 5.197742e+02 3.574133e-03 3004 1 - 10 4.493750e+02 5.211793e+02 3.670716e-02 3328 1 + 1 8.525000e+02 5.197742e+02 3.633022e-03 3004 1 + 10 4.493750e+02 5.211793e+02 3.788209e-02 3328 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 3.670716e-02 +total time (s) : 3.788209e-02 total solves : 3328 best bound : 5.211793e+02 simulation ci : 5.268125e+02 ± 1.227709e+02 diff --git a/previews/PR797/examples/SDDP_1.0.log b/previews/PR797/examples/SDDP_1.0.log index 58301e284..01318f52d 100644 --- a/previews/PR797/examples/SDDP_1.0.log +++ b/previews/PR797/examples/SDDP_1.0.log @@ -20,11 +20,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 6.750000e+02 5.500000e+02 2.701998e-03 407 1 - 10 4.500000e+02 5.733959e+02 2.942181e-02 731 1 + 1 6.750000e+02 5.500000e+02 2.838135e-03 407 1 + 10 4.500000e+02 5.733959e+02 3.033614e-02 731 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 2.942181e-02 +total time (s) : 3.033614e-02 total solves : 731 best bound : 5.733959e+02 simulation ci : 5.000000e+02 ± 1.079583e+02 diff --git a/previews/PR797/examples/StochDynamicProgramming.jl_multistock/index.html b/previews/PR797/examples/StochDynamicProgramming.jl_multistock/index.html index cb4240a0e..654803fcd 100644 --- a/previews/PR797/examples/StochDynamicProgramming.jl_multistock/index.html +++ b/previews/PR797/examples/StochDynamicProgramming.jl_multistock/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

StochDynamicProgramming: the multistock problem

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example comes from StochDynamicProgramming.jl.

using SDDP, HiGHS, Test
+

StochDynamicProgramming: the multistock problem

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example comes from StochDynamicProgramming.jl.

using SDDP, HiGHS, Test
 
 function test_multistock_example()
     model = SDDP.LinearPolicyGraph(;
@@ -80,21 +80,21 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-        10  -3.878303e+00 -4.434982e+00  1.973391e-01      1400   1
-        20  -4.262885e+00 -4.399265e+00  3.153272e-01      2800   1
-        30  -3.075162e+00 -4.382527e+00  4.400902e-01      4200   1
-        40  -3.761147e+00 -4.369587e+00  5.725801e-01      5600   1
-        50  -4.323162e+00 -4.362199e+00  7.118852e-01      7000   1
-        60  -3.654943e+00 -4.358401e+00  8.551462e-01      8400   1
-        70  -4.010883e+00 -4.357368e+00  9.963672e-01      9800   1
-        80  -4.314412e+00 -4.355714e+00  1.143003e+00     11200   1
-        90  -4.542422e+00 -4.353708e+00  1.354980e+00     12600   1
-       100  -4.178952e+00 -4.351685e+00  1.504494e+00     14000   1
+        10  -3.878303e+00 -4.434982e+00  1.926260e-01      1400   1
+        20  -4.262885e+00 -4.399265e+00  3.126850e-01      2800   1
+        30  -3.075162e+00 -4.382527e+00  4.949551e-01      4200   1
+        40  -3.761147e+00 -4.369587e+00  6.276181e-01      5600   1
+        50  -4.323162e+00 -4.362199e+00  7.675850e-01      7000   1
+        60  -3.654943e+00 -4.358401e+00  9.108100e-01      8400   1
+        70  -4.010883e+00 -4.357368e+00  1.055288e+00      9800   1
+        80  -4.314412e+00 -4.355714e+00  1.204105e+00     11200   1
+        90  -4.542422e+00 -4.353708e+00  1.358413e+00     12600   1
+       100  -4.178952e+00 -4.351685e+00  1.507290e+00     14000   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 1.504494e+00
+total time (s) : 1.507290e+00
 total solves   : 14000
 best bound     : -4.351685e+00
 simulation ci  : -4.246786e+00 ± 8.703997e-02
 numeric issues : 0
--------------------------------------------------------------------
+-------------------------------------------------------------------
diff --git a/previews/PR797/examples/StochDynamicProgramming.jl_stock/index.html b/previews/PR797/examples/StochDynamicProgramming.jl_stock/index.html index 8664aa309..822cf40b6 100644 --- a/previews/PR797/examples/StochDynamicProgramming.jl_stock/index.html +++ b/previews/PR797/examples/StochDynamicProgramming.jl_stock/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

StochDynamicProgramming: the stock problem

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example comes from StochDynamicProgramming.jl.

using SDDP, HiGHS, Test
+

StochDynamicProgramming: the stock problem

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example comes from StochDynamicProgramming.jl.

using SDDP, HiGHS, Test
 
 function stock_example()
     model = SDDP.PolicyGraph(
@@ -57,18 +57,18 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-        10  -1.573154e+00 -1.474247e+00  6.890011e-02      1050   1
-        20  -1.346690e+00 -1.471483e+00  1.073132e-01      1600   1
-        30  -1.308031e+00 -1.471307e+00  1.897640e-01      2650   1
-        40  -1.401200e+00 -1.471167e+00  2.311590e-01      3200   1
-        50  -1.557483e+00 -1.471097e+00  3.173201e-01      4250   1
-        60  -1.534169e+00 -1.471075e+00  3.621981e-01      4800   1
-        65  -1.689864e+00 -1.471075e+00  3.846991e-01      5075   1
+        10  -1.573154e+00 -1.474247e+00  7.284117e-02      1050   1
+        20  -1.346690e+00 -1.471483e+00  1.128891e-01      1600   1
+        30  -1.308031e+00 -1.471307e+00  2.000482e-01      2650   1
+        40  -1.401200e+00 -1.471167e+00  2.435000e-01      3200   1
+        50  -1.557483e+00 -1.471097e+00  3.391612e-01      4250   1
+        60  -1.534169e+00 -1.471075e+00  3.864150e-01      4800   1
+        65  -1.689864e+00 -1.471075e+00  4.101441e-01      5075   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 3.846991e-01
+total time (s) : 4.101441e-01
 total solves   : 5075
 best bound     : -1.471075e+00
 simulation ci  : -1.484094e+00 ± 4.058993e-02
 numeric issues : 0
--------------------------------------------------------------------
+-------------------------------------------------------------------
diff --git a/previews/PR797/examples/StructDualDynProg.jl_prob5.2_2stages/index.html b/previews/PR797/examples/StructDualDynProg.jl_prob5.2_2stages/index.html index aacd49fb9..4d9cc4454 100644 --- a/previews/PR797/examples/StructDualDynProg.jl_prob5.2_2stages/index.html +++ b/previews/PR797/examples/StructDualDynProg.jl_prob5.2_2stages/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

StructDualDynProg: Problem 5.2, 2 stages

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example comes from StochasticDualDynamicProgramming.jl.

using SDDP, HiGHS, Test
+

StructDualDynProg: Problem 5.2, 2 stages

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example comes from StochasticDualDynamicProgramming.jl.

using SDDP, HiGHS, Test
 
 function test_prob52_2stages()
     model = SDDP.LinearPolicyGraph(;
@@ -85,16 +85,16 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-        10   3.455904e+05  3.147347e+05  8.348942e-03        54   1
-        20   3.336455e+05  3.402383e+05  1.453710e-02       104   1
-        30   3.337559e+05  3.403155e+05  2.199411e-02       158   1
-        40   3.337559e+05  3.403155e+05  2.912116e-02       208   1
-        48   3.337559e+05  3.403155e+05  3.532600e-02       248   1
+        10   3.455904e+05  3.147347e+05  8.333206e-03        54   1
+        20   3.336455e+05  3.402383e+05  1.453018e-02       104   1
+        30   3.337559e+05  3.403155e+05  2.197099e-02       158   1
+        40   3.337559e+05  3.403155e+05  2.923417e-02       208   1
+        48   3.337559e+05  3.403155e+05  3.553319e-02       248   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 3.532600e-02
+total time (s) : 3.553319e-02
 total solves   : 248
 best bound     :  3.403155e+05
 simulation ci  :  1.351676e+08 ± 1.785770e+08
 numeric issues : 0
--------------------------------------------------------------------
+-------------------------------------------------------------------
diff --git a/previews/PR797/examples/StructDualDynProg.jl_prob5.2_3stages/index.html b/previews/PR797/examples/StructDualDynProg.jl_prob5.2_3stages/index.html index eeee1a2e2..ed43837d7 100644 --- a/previews/PR797/examples/StructDualDynProg.jl_prob5.2_3stages/index.html +++ b/previews/PR797/examples/StructDualDynProg.jl_prob5.2_3stages/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

StructDualDynProg: Problem 5.2, 3 stages

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example comes from StochasticDualDynamicProgramming.jl.

using SDDP, HiGHS, Test
+

StructDualDynProg: Problem 5.2, 3 stages

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example comes from StochasticDualDynamicProgramming.jl.

using SDDP, HiGHS, Test
 
 function test_prob52_3stages()
     model = SDDP.LinearPolicyGraph(;
@@ -81,16 +81,16 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-        10   4.403329e+05  3.509666e+05  1.333809e-02        92   1
-        20   4.055335e+05  4.054833e+05  2.393723e-02       172   1
-        30   3.959476e+05  4.067125e+05  3.701711e-02       264   1
-        40   3.959476e+05  4.067125e+05  5.015016e-02       344   1
-        47   3.959476e+05  4.067125e+05  6.019616e-02       400   1
+        10   4.403329e+05  3.509666e+05  1.391387e-02        92   1
+        20   4.055335e+05  4.054833e+05  2.501893e-02       172   1
+        30   3.959476e+05  4.067125e+05  8.837080e-02       264   1
+        40   3.959476e+05  4.067125e+05  1.018989e-01       344   1
+        47   3.959476e+05  4.067125e+05  1.120598e-01       400   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 6.019616e-02
+total time (s) : 1.120598e-01
 total solves   : 400
 best bound     :  4.067125e+05
 simulation ci  :  2.695623e+07 ± 3.645336e+07
 numeric issues : 0
--------------------------------------------------------------------
+-------------------------------------------------------------------
diff --git a/previews/PR797/examples/agriculture_mccardle_farm/index.html b/previews/PR797/examples/agriculture_mccardle_farm/index.html index d21bcc273..711a7b5df 100644 --- a/previews/PR797/examples/agriculture_mccardle_farm/index.html +++ b/previews/PR797/examples/agriculture_mccardle_farm/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

The farm planning problem

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

There are four stages. The first stage is a deterministic planning stage. The next three are wait-and-see operational stages. The uncertainty in the three operational stages is a Markov chain for weather. There are three Markov states: dry, normal, and wet.

Inspired by R. McCardle, Farm management optimization. Masters thesis, University of Louisville, Louisville, Kentucky, United States of America (2009).

All data, including short variable names, is taken from that thesis.

using SDDP, HiGHS, Test
+

The farm planning problem

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

There are four stages. The first stage is a deterministic planning stage. The next three are wait-and-see operational stages. The uncertainty in the three operational stages is a Markov chain for weather. There are three Markov states: dry, normal, and wet.

Inspired by R. McCardle, Farm management optimization. Masters thesis, University of Louisville, Louisville, Kentucky, United States of America (2009).

All data, including short variable names, is taken from that thesis.
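
The tutorial's full test_mccardle_farm_model is given below; as a preliminary, here is a minimal sketch of how a three-state weather Markov chain of this kind can be expressed with SDDP.MarkovianPolicyGraph. The transition probabilities, the stock state variable, and the bounds are illustrative assumptions only, not the data from the thesis.

using SDDP, HiGHS

# Sketch only. Stage 1 is deterministic (a single Markov state); stages
# 2-4 share a three-state weather chain: 1 = dry, 2 = normal, 3 = wet.
# The probabilities below are placeholders, not the thesis data.
P = [0.2 0.6 0.2; 0.2 0.6 0.2; 0.2 0.6 0.2]
sketch = SDDP.MarkovianPolicyGraph(;
    transition_matrices = [ones(1, 1), [0.2 0.6 0.2], P, P],
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, node
    t, weather = node  # weather is the index of the current Markov state
    # Placeholder subproblem: carry a single feed stock between stages,
    # with consumption that depends on the weather state.
    @variable(sp, stock >= 0, SDDP.State, initial_value = 0.0)
    @variable(sp, 0 <= buy <= 100)
    @constraint(sp, stock.out == stock.in + buy - 10 * weather)
    @stageobjective(sp, 2.0 * buy)
end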

using SDDP, HiGHS, Test
 
 function test_mccardle_farm_model()
     S = [  # cutting, stage
@@ -124,4 +124,4 @@
     @test SDDP.calculate_bound(model) ≈ 4074.1391 atol = 1e-5
 end
 
-test_mccardle_farm_model()
Test Passed
+test_mccardle_farm_model()
Test Passed
diff --git a/previews/PR797/examples/air_conditioning/index.html b/previews/PR797/examples/air_conditioning/index.html index bf7b75c51..b3c16afe7 100644 --- a/previews/PR797/examples/air_conditioning/index.html +++ b/previews/PR797/examples/air_conditioning/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Air conditioning

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

Taken from Anthony Papavasiliou's notes on SDDP

Consider the following problem:

  • Produce air conditioners for 3 months
  • 200 units/month at 100 $/unit
  • Overtime costs 300 $/unit
  • Known demand of 100 units for period 1
  • Equally likely demand, 100 or 300 units, for periods 2, 3
  • Storage cost is 50 $/unit
  • All demand must be met

The known optimal solution is $62,500

using SDDP, HiGHS, Test
+

Air conditioning

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

Taken from Anthony Papavasiliou's notes on SDDP

Consider the following problem:

  • Produce air conditioners for 3 months
  • 200 units/month at 100 $/unit
  • Overtime costs 300 $/unit
  • Known demand of 100 units for period 1
  • Equally likely demand, 100 or 300 units, for periods 2, 3
  • Storage cost is 50 $/unit
  • All demand must be met

The known optimal solution is $62,500
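
The tutorial's own air_conditioning_model follows below; first, a minimal sketch of the kind of stage subproblem the bullet points describe. The variable names, the choice to charge the 50 $/unit storage cost on end-of-stage stock, and the zero lower bound are assumptions, and this sketch is not guaranteed to reproduce the $62,500 optimum exactly.

using SDDP, HiGHS

# Sketch only: state = units in storage; controls = regular production
# (capacity 200/month at 100 $/unit) and overtime (300 $/unit).
# Demand is 100 in month 1, then 100 or 300 with equal probability.
sketch = SDDP.LinearPolicyGraph(;
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, stock >= 0, SDDP.State, initial_value = 0.0)
    @variable(sp, 0 <= produce <= 200)
    @variable(sp, overtime >= 0)
    @variable(sp, demand)
    @constraint(sp, stock.out == stock.in + produce + overtime - demand)
    SDDP.parameterize(sp, t == 1 ? [100.0] : [100.0, 300.0]) do ω
        JuMP.fix(demand, ω)
    end
    # 50 $/unit storage cost, assumed to apply to end-of-stage stock.
    @stageobjective(sp, 100 * produce + 300 * overtime + 50 * stock.out)
end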

using SDDP, HiGHS, Test
 
 function air_conditioning_model(duality_handler)
     model = SDDP.LinearPolicyGraph(;
@@ -66,11 +66,11 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1L  7.000000e+04  6.166667e+04  5.694509e-01         8   1
-        40L  5.500000e+04  6.250000e+04  8.037488e-01       344   1
+         1L  7.000000e+04  6.166667e+04  5.688460e-01         8   1
+        40L  5.500000e+04  6.250000e+04  8.131721e-01       344   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 8.037488e-01
+total time (s) : 8.131721e-01
 total solves   : 344
 best bound     :  6.250000e+04
 simulation ci  :  6.091250e+04 ± 6.325667e+03
@@ -103,13 +103,13 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   3.000000e+04  6.250000e+04  3.943920e-03         8   1
-        20   4.000000e+04  6.250000e+04  4.461193e-02       172   1
+         1   3.000000e+04  6.250000e+04  3.880978e-03         8   1
+        20   4.000000e+04  6.250000e+04  4.516983e-02       172   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 4.461193e-02
+total time (s) : 4.516983e-02
 total solves   : 172
 best bound     :  6.250000e+04
 simulation ci  :  5.650000e+04 ± 6.785916e+03
 numeric issues : 0
--------------------------------------------------------------------
+-------------------------------------------------------------------
diff --git a/previews/PR797/examples/air_conditioning_forward/index.html b/previews/PR797/examples/air_conditioning_forward/index.html index 79830057d..906a45bb2 100644 --- a/previews/PR797/examples/air_conditioning_forward/index.html +++ b/previews/PR797/examples/air_conditioning_forward/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Training with a different forward model

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

using SDDP
+
+Test.@test isapprox(SDDP.calculate_bound(convex), 62_500.0, atol = 0.1)
Test Passed
diff --git a/previews/PR797/examples/all_blacks/index.html b/previews/PR797/examples/all_blacks/index.html index cf491c154..f84ad49be 100644 --- a/previews/PR797/examples/all_blacks/index.html +++ b/previews/PR797/examples/all_blacks/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Deterministic All Blacks

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

using SDDP, HiGHS, Test
+

Deterministic All Blacks

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

using SDDP, HiGHS, Test
 
 function all_blacks()
     # Number of time periods, number of seats, R_ij = revenue from selling seat
@@ -61,13 +61,13 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1L  6.000000e+00  9.000000e+00  3.937697e-02         6   1
-        20L  9.000000e+00  9.000000e+00  8.010602e-02       123   1
+         1L  6.000000e+00  9.000000e+00  3.900313e-02         6   1
+        20L  9.000000e+00  9.000000e+00  7.972097e-02       123   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 8.010602e-02
+total time (s) : 7.972097e-02
 total solves   : 123
 best bound     :  9.000000e+00
 simulation ci  :  8.850000e+00 ± 2.940000e-01
 numeric issues : 0
--------------------------------------------------------------------
+-------------------------------------------------------------------
diff --git a/previews/PR797/examples/asset_management_simple/index.html b/previews/PR797/examples/asset_management_simple/index.html index 506b1adae..d5686c44c 100644 --- a/previews/PR797/examples/asset_management_simple/index.html +++ b/previews/PR797/examples/asset_management_simple/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Asset management

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

Taken from the book J.R. Birge, F. Louveaux, Introduction to Stochastic Programming, Springer Series in Operations Research and Financial Engineering, Springer New York, New York, NY, 2011

using SDDP, HiGHS, Test
+

Asset management

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

Taken from the book J.R. Birge, F. Louveaux, Introduction to Stochastic Programming, Springer Series in Operations Research and Financial Engineering, Springer New York, New York, NY, 2011

using SDDP, HiGHS, Test
 
 function asset_management_simple()
     model = SDDP.PolicyGraph(
@@ -74,19 +74,19 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         5  -1.620000e+00 -8.522173e-01  1.323819e-02        87   1
-        10  -1.847411e-13  1.392784e+00  1.999807e-02       142   1
-        15  -6.963319e-13  1.514085e+00  2.767110e-02       197   1
-        20   1.136868e-13  1.514085e+00  3.539515e-02       252   1
-        25  -1.080025e-12  1.514085e+00  9.109902e-02       339   1
-        30   1.136868e-13  1.514085e+00  9.960604e-02       394   1
-        35  -2.479988e+01  1.514085e+00  1.085091e-01       449   1
-        40   1.136868e-13  1.514085e+00  1.179650e-01       504   1
+         5  -1.620000e+00 -8.522173e-01  1.324701e-02        87   1
+        10  -1.847411e-13  1.392784e+00  2.023602e-02       142   1
+        15  -6.963319e-13  1.514085e+00  2.795506e-02       197   1
+        20   1.136868e-13  1.514085e+00  3.572106e-02       252   1
+        25  -1.080025e-12  1.514085e+00  9.403610e-02       339   1
+        30   1.136868e-13  1.514085e+00  1.026909e-01       394   1
+        35  -2.479988e+01  1.514085e+00  1.117070e-01       449   1
+        40   1.136868e-13  1.514085e+00  1.212301e-01       504   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 1.179650e-01
+total time (s) : 1.212301e-01
 total solves   : 504
 best bound     :  1.514085e+00
 simulation ci  :  3.429060e+00 ± 6.665883e+00
 numeric issues : 0
--------------------------------------------------------------------
+-------------------------------------------------------------------
diff --git a/previews/PR797/examples/asset_management_stagewise/index.html b/previews/PR797/examples/asset_management_stagewise/index.html index 54472f463..dae86dde5 100644 --- a/previews/PR797/examples/asset_management_stagewise/index.html +++ b/previews/PR797/examples/asset_management_stagewise/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Asset management with modifications

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

A modified version of the Asset Management Problem, taken from the book J.R. Birge, F. Louveaux, Introduction to Stochastic Programming, Springer Series in Operations Research and Financial Engineering, Springer New York, New York, NY, 2011.

using SDDP, HiGHS, Test
+

Asset management with modifications

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

A modified version of the Asset Management Problem, taken from the book J.R. Birge, F. Louveaux, Introduction to Stochastic Programming, Springer Series in Operations Research and Financial Engineering, Springer New York, New York, NY, 2011.

using SDDP, HiGHS, Test
 
 function asset_management_stagewise(; cut_type)
     w_s = [1.25, 1.06]
@@ -91,14 +91,14 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-        10   4.375000e+00  6.798204e+00  1.507630e-01       278   1
-        20   1.230957e+01  1.358825e+00  1.964672e-01       428   1
-        30   8.859026e+00  1.278410e+00  2.287230e-01       706   1
-        40  -2.315795e+01  1.278410e+00  2.517691e-01       856   1
-        49   3.014193e+01  1.278410e+00  2.728870e-01       991   1
+        10   4.375000e+00  6.798204e+00  1.525280e-01       278   1
+        20   1.230957e+01  1.358825e+00  1.701591e-01       428   1
+        30   8.859026e+00  1.278410e+00  2.023129e-01       706   1
+        40  -2.315795e+01  1.278410e+00  2.255621e-01       856   1
+        49   3.014193e+01  1.278410e+00  2.473340e-01       991   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 2.728870e-01
+total time (s) : 2.473340e-01
 total solves   : 991
 best bound     :  1.278410e+00
 simulation ci  : -1.755629e+00 ± 5.526921e+00
@@ -130,15 +130,15 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-        10  -2.454000e+00  1.647363e+00  3.616905e-02       278   1
-        20  -3.575118e+00  1.278410e+00  6.460500e-02       428   1
-        30  -5.003795e+01  1.278410e+00  1.107509e-01       706   1
-        40   6.835609e+00  1.278410e+00  1.499109e-01       856   1
+        10  -2.454000e+00  1.647363e+00  3.704500e-02       278   1
+        20  -3.575118e+00  1.278410e+00  6.618023e-02       428   1
+        30  -5.003795e+01  1.278410e+00  1.144490e-01       706   1
+        40   6.835609e+00  1.278410e+00  1.569302e-01       856   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 1.499109e-01
+total time (s) : 1.569302e-01
 total solves   : 856
 best bound     :  1.278410e+00
 simulation ci  :  4.369345e+00 ± 4.780393e+00
 numeric issues : 0
--------------------------------------------------------------------
+-------------------------------------------------------------------
diff --git a/previews/PR797/examples/belief/index.html b/previews/PR797/examples/belief/index.html index 4537dfb26..afa8c1629 100644 --- a/previews/PR797/examples/belief/index.html +++ b/previews/PR797/examples/belief/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Partially observable inventory management

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

using SDDP, HiGHS, Random, Statistics, Test
+

Partially observable inventory management

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

using SDDP, HiGHS, Random, Statistics, Test
 
 function inventory_management_problem()
     demand_values = [1.0, 2.0]
@@ -94,21 +94,21 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-        10   4.787277e+00  9.346930e+00  1.398145e+00       900   1
-        20   6.374753e+00  1.361934e+01  1.562721e+00      1720   1
-        30   2.813321e+01  1.651297e+01  1.909971e+00      3036   1
-        40   1.654759e+01  1.632970e+01  2.247956e+00      4192   1
-        50   3.570941e+00  1.846889e+01  2.498539e+00      5020   1
-        60   1.087425e+01  1.890254e+01  2.778576e+00      5808   1
-        70   9.381610e+00  1.940320e+01  3.060148e+00      6540   1
-        80   5.648731e+01  1.962435e+01  3.272938e+00      7088   1
-        90   3.879273e+01  1.981008e+01  3.757243e+00      8180   1
-       100   7.870187e+00  1.997117e+01  3.973397e+00      8664   1
+        10   4.787277e+00  9.346930e+00  1.394588e+00       900   1
+        20   6.374753e+00  1.361934e+01  1.604891e+00      1720   1
+        30   2.813321e+01  1.651297e+01  1.935454e+00      3036   1
+        40   1.654759e+01  1.632970e+01  2.307615e+00      4192   1
+        50   3.570941e+00  1.846889e+01  2.575974e+00      5020   1
+        60   1.087425e+01  1.890254e+01  2.870635e+00      5808   1
+        70   9.381610e+00  1.940320e+01  3.166286e+00      6540   1
+        80   5.648731e+01  1.962435e+01  3.395494e+00      7088   1
+        90   3.879273e+01  1.981008e+01  3.906569e+00      8180   1
+       100   7.870187e+00  1.997117e+01  4.144914e+00      8664   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 3.973397e+00
+total time (s) : 4.144914e+00
 total solves   : 8664
 best bound     :  1.997117e+01
 simulation ci  :  2.275399e+01 ± 4.541987e+00
 numeric issues : 0
--------------------------------------------------------------------
+-------------------------------------------------------------------
diff --git a/previews/PR797/examples/biobjective_hydro/index.html b/previews/PR797/examples/biobjective_hydro/index.html index 141e8e1e4..b3958cdb3 100644 --- a/previews/PR797/examples/biobjective_hydro/index.html +++ b/previews/PR797/examples/biobjective_hydro/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Biobjective hydro-thermal

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

using SDDP, HiGHS, Statistics, Test
+

Biobjective hydro-thermal

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

using SDDP, HiGHS, Statistics, Test
 
 function biobjective_example()
     model = SDDP.LinearPolicyGraph(;
@@ -80,11 +80,11 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   0.000000e+00  0.000000e+00  9.258032e-03        36   1
-        10   0.000000e+00  0.000000e+00  2.888799e-02       360   1
+         1   0.000000e+00  0.000000e+00  9.275913e-03        36   1
+        10   0.000000e+00  0.000000e+00  6.656289e-02       360   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 2.888799e-02
+total time (s) : 6.656289e-02
 total solves   : 360
 best bound     :  0.000000e+00
 simulation ci  :  0.000000e+00 ± 0.000000e+00
@@ -113,11 +113,11 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   6.750000e+02  5.500000e+02  2.701998e-03       407   1
-        10   4.500000e+02  5.733959e+02  2.942181e-02       731   1
+         1   6.750000e+02  5.500000e+02  2.838135e-03       407   1
+        10   4.500000e+02  5.733959e+02  3.033614e-02       731   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 2.942181e-02
+total time (s) : 3.033614e-02
 total solves   : 731
 best bound     :  5.733959e+02
 simulation ci  :  5.000000e+02 ± 1.079583e+02
@@ -146,11 +146,11 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   4.850000e+02  3.349793e+02  3.360033e-03       778   1
-        10   3.550000e+02  3.468286e+02  3.125787e-02      1102   1
+         1   4.850000e+02  3.349793e+02  3.111839e-03       778   1
+        10   3.550000e+02  3.468286e+02  3.192997e-02      1102   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 3.125787e-02
+total time (s) : 3.192997e-02
 total solves   : 1102
 best bound     :  3.468286e+02
 simulation ci  :  3.948309e+02 ± 7.954180e+01
@@ -179,11 +179,11 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   1.887500e+02  1.995243e+02  2.701044e-03      1149   1
-        10   2.962500e+02  2.052855e+02  2.930593e-02      1473   1
+         1   1.887500e+02  1.995243e+02  2.982855e-03      1149   1
+        10   2.962500e+02  2.052855e+02  3.110385e-02      1473   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 2.930593e-02
+total time (s) : 3.110385e-02
 total solves   : 1473
 best bound     :  2.052855e+02
 simulation ci  :  2.040201e+02 ± 3.876873e+01
@@ -212,11 +212,11 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   3.737500e+02  4.626061e+02  3.299952e-03      1520   1
-        10   2.450000e+02  4.658509e+02  3.354406e-02      1844   1
+         1   3.737500e+02  4.626061e+02  3.888130e-03      1520   1
+        10   2.450000e+02  4.658509e+02  3.495312e-02      1844   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 3.354406e-02
+total time (s) : 3.495312e-02
 total solves   : 1844
 best bound     :  4.658509e+02
 simulation ci  :  3.907376e+02 ± 9.045105e+01
@@ -245,11 +245,11 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   1.675000e+02  1.129545e+02  3.178835e-03      1891   1
-        10   1.362500e+02  1.129771e+02  3.011298e-02      2215   1
+         1   1.675000e+02  1.129545e+02  2.898932e-03      1891   1
+        10   1.362500e+02  1.129771e+02  3.076887e-02      2215   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 3.011298e-02
+total time (s) : 3.076887e-02
 total solves   : 2215
 best bound     :  1.129771e+02
 simulation ci  :  1.176375e+02 ± 1.334615e+01
@@ -278,11 +278,11 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   2.562500e+02  2.788373e+02  3.196955e-03      2262   1
-        10   2.375000e+02  2.795671e+02  3.269506e-02      2586   1
+         1   2.562500e+02  2.788373e+02  3.362894e-03      2262   1
+        10   2.375000e+02  2.795671e+02  3.389001e-02      2586   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 3.269506e-02
+total time (s) : 3.389001e-02
 total solves   : 2586
 best bound     :  2.795671e+02
 simulation ci  :  2.375000e+02 ± 3.099032e+01
@@ -311,11 +311,11 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   3.812500e+02  4.072952e+02  3.537178e-03      2633   1
-        10   5.818750e+02  4.080500e+02  3.491998e-02      2957   1
+         1   3.812500e+02  4.072952e+02  3.739119e-03      2633   1
+        10   5.818750e+02  4.080500e+02  3.642511e-02      2957   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 3.491998e-02
+total time (s) : 3.642511e-02
 total solves   : 2957
 best bound     :  4.080500e+02
 simulation ci  :  4.235323e+02 ± 1.029245e+02
@@ -344,11 +344,11 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   8.525000e+02  5.197742e+02  3.574133e-03      3004   1
-        10   4.493750e+02  5.211793e+02  3.670716e-02      3328   1
+         1   8.525000e+02  5.197742e+02  3.633022e-03      3004   1
+        10   4.493750e+02  5.211793e+02  3.788209e-02      3328   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 3.670716e-02
+total time (s) : 3.788209e-02
 total solves   : 3328
 best bound     :  5.211793e+02
 simulation ci  :  5.268125e+02 ± 1.227709e+02
@@ -377,13 +377,13 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   3.437500e+01  5.937500e+01  3.138065e-03      3375   1
-        10   3.750000e+01  5.938557e+01  3.082395e-02      3699   1
+         1   3.437500e+01  5.937500e+01  3.843069e-03      3375   1
+        10   3.750000e+01  5.938557e+01  3.292513e-02      3699   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 3.082395e-02
+total time (s) : 3.292513e-02
 total solves   : 3699
 best bound     :  5.938557e+01
 simulation ci  :  5.906250e+01 ± 1.352595e+01
 numeric issues : 0
--------------------------------------------------------------------
+-------------------------------------------------------------------
diff --git a/previews/PR797/examples/booking_management/index.html b/previews/PR797/examples/booking_management/index.html
index b0199ff3d..69aafd5de 100644
--- a/previews/PR797/examples/booking_management/index.html
+++ b/previews/PR797/examples/booking_management/index.html
@@ -3,7 +3,7 @@

Booking management

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example concerns the acceptance of booking requests for rooms in a hotel in the lead-up to a large event.

Each stage, we receive a booking request and can choose to accept or decline it. Once accepted, bookings cannot be terminated.

using SDDP, HiGHS, Test
+

Booking management

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example concerns the acceptance of booking requests for rooms in a hotel in the lead-up to a large event.

Each stage, we receive a booking request and can choose to accept or decline it. Once accepted, bookings cannot be terminated.
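As a rough sketch (not the booking_management_model defined below, whose variables, dimensions, and objective differ), the rule that accepted bookings cannot be terminated can be encoded with a binary SDDP.State variable per room and day whose value may only decrease between stages:

using SDDP, HiGHS

# Illustrative only: 2 rooms, 3 days. vacancy[r, d] = 1 while room r is still
# free on day d; once a booking is accepted it drops to 0 and must stay 0.
sketch = SDDP.LinearPolicyGraph(;
    stages = 3,
    sense = :Max,
    upper_bound = 10.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, vacancy[1:2, 1:3], SDDP.State, Bin, initial_value = 1)
    # Bookings are irreversible: a room-day can only go from vacant to booked.
    @constraint(sp, [r = 1:2, d = 1:3], vacancy[r, d].out <= vacancy[r, d].in)
    # Placeholder objective: reward each room-day booked in this stage.
    @stageobjective(
        sp,
        sum(vacancy[r, d].in - vacancy[r, d].out for r in 1:2, d in 1:3),
    )
end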

using SDDP, HiGHS, Test
 
 function booking_management_model(num_days, num_rooms, num_requests)
     # maximum revenue that could be accrued.
@@ -96,4 +96,4 @@
     end
 end
 
-booking_management(SDDP.ContinuousConicDuality())
Test Passed

New version of HiGHS stalls booking_management(SDDP.LagrangianDuality())

+booking_management(SDDP.ContinuousConicDuality())
Test Passed

New version of HiGHS stalls booking_management(SDDP.LagrangianDuality())

diff --git a/previews/PR797/examples/generation_expansion/index.html b/previews/PR797/examples/generation_expansion/index.html
index f3885a576..73d9af22f 100644
--- a/previews/PR797/examples/generation_expansion/index.html
+++ b/previews/PR797/examples/generation_expansion/index.html
@@ -3,7 +3,7 @@

Generation expansion

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

using SDDP
+

Generation expansion

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

using SDDP
 import HiGHS
 import Test
 
@@ -115,15 +115,15 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-        10   2.549668e+06  2.078257e+06  5.043571e-01       920   1
-        20   5.494568e+05  2.078257e+06  6.959481e-01      1340   1
-        30   4.985879e+04  2.078257e+06  1.225232e+00      2260   1
-        40   3.799447e+06  2.078257e+06  1.424117e+00      2680   1
-        50   1.049867e+06  2.078257e+06  1.979467e+00      3600   1
-        60   3.985191e+04  2.078257e+06  2.177041e+00      4020   1
+        10   2.549668e+06  2.078257e+06  5.306101e-01       920   1
+        20   5.494568e+05  2.078257e+06  7.298350e-01      1340   1
+        30   4.985879e+04  2.078257e+06  1.274781e+00      2260   1
+        40   3.799447e+06  2.078257e+06  1.478624e+00      2680   1
+        50   1.049867e+06  2.078257e+06  2.046983e+00      3600   1
+        60   3.985191e+04  2.078257e+06  2.251220e+00      4020   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 2.177041e+00
+total time (s) : 2.251220e+00
 total solves   : 4020
 best bound     :  2.078257e+06
 simulation ci  :  2.031697e+06 ± 3.922745e+05
@@ -157,17 +157,17 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-        10L  4.986663e+04  2.079119e+06  9.385839e-01       920   1
-        20L  3.799878e+06  2.079330e+06  1.656679e+00      1340   1
-        30L  3.003923e+04  2.079457e+06  2.762526e+00      2260   1
-        40L  5.549882e+06  2.079457e+06  3.560574e+00      2680   1
-        50L  2.799466e+06  2.079457e+06  4.713833e+00      3600   1
-        60L  3.549880e+06  2.079457e+06  5.473797e+00      4020   1
+        10L  4.986663e+04  2.079119e+06  9.832032e-01       920   1
+        20L  3.799878e+06  2.079330e+06  1.716709e+00      1340   1
+        30L  3.003923e+04  2.079457e+06  2.874528e+00      2260   1
+        40L  5.549882e+06  2.079457e+06  3.697897e+00      2680   1
+        50L  2.799466e+06  2.079457e+06  4.925736e+00      3600   1
+        60L  3.549880e+06  2.079457e+06  5.718980e+00      4020   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 5.473797e+00
+total time (s) : 5.718980e+00
 total solves   : 4020
 best bound     :  2.079457e+06
 simulation ci  :  2.352204e+06 ± 5.377531e+05
 numeric issues : 0
--------------------------------------------------------------------
+-------------------------------------------------------------------
diff --git a/previews/PR797/examples/hydro_valley/index.html b/previews/PR797/examples/hydro_valley/index.html
index 64e7b887f..37cd423ea 100644
--- a/previews/PR797/examples/hydro_valley/index.html
+++ b/previews/PR797/examples/hydro_valley/index.html
@@ -3,7 +3,7 @@

Hydro valleys

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This problem is a version of the hydro-thermal scheduling problem. The goal is to operate two hydro-dams in a valley chain over time in the face of inflow and price uncertainty.

Turbine response curves are modelled by piecewise linear functions that map the flow rate to a power output. These can be controlled by specifying the breakpoints of the piecewise linear function as the knots in the Turbine struct.

The model can be created using the hydro_valley_model function. It has a few keyword arguments to allow automated testing of the library. hasstagewiseinflows determines if the RHS noise constraint should be added. hasmarkovprice determines if the price uncertainty (modelled by a Markov chain) should be added.

In the third stage, the Markov chain has some unreachable states to test some code-paths in the library.

We can also set the sense to :Min or :Max (the objective and bound are flipped appropriately).

using SDDP, HiGHS, Test, Random
+

Hydro valleys

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This problem is a version of the hydro-thermal scheduling problem. The goal is to operate two hydro-dams in a valley chain over time in the face of inflow and price uncertainty.

Turbine response curves are modelled by piecewise linear functions that map the flow rate to a power output. These can be controlled by specifying the breakpoints of the piecewise linear function as the knots in the Turbine struct.

The model can be created using the hydro_valley_model function. It has a few keyword arguments to allow automated testing of the library. hasstagewiseinflows determines if the RHS noise constraint should be added. hasmarkovprice determines if the price uncertainty (modelled by a Markov chain) should be added.

In the third stage, the Markov chain has some unreachable states to test some code-paths in the library.

We can also set the sense to :Min or :Max (the objective and bound are flipped appropriately).

using SDDP, HiGHS, Test, Random
 
 struct Turbine
     flowknots::Vector{Float64}
@@ -280,4 +280,4 @@
     ###  = $835
 end
 
-test_hydro_valley_model()
Test Passed
+test_hydro_valley_model()
Test Passed
diff --git a/previews/PR797/examples/infinite_horizon_hydro_thermal/index.html b/previews/PR797/examples/infinite_horizon_hydro_thermal/index.html
index 2f8b9cc5b..d8aac2ede 100644
--- a/previews/PR797/examples/infinite_horizon_hydro_thermal/index.html
+++ b/previews/PR797/examples/infinite_horizon_hydro_thermal/index.html
@@ -3,7 +3,7 @@

Infinite horizon hydro-thermal

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

using SDDP, HiGHS, Test, Statistics
+

Infinite horizon hydro-thermal

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

using SDDP, HiGHS, Test, Statistics
 
 function infinite_hydro_thermal(; cut_type)
     Ω = [
@@ -93,13 +93,13 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-       100   2.500000e+01  1.188965e+02  7.742610e-01      1946   1
-       200   2.500000e+01  1.191634e+02  9.744711e-01      3920   1
-       300   0.000000e+00  1.191666e+02  1.181034e+00      5902   1
-       330   2.500000e+01  1.191667e+02  1.221981e+00      6224   1
+       100   2.500000e+01  1.188965e+02  7.883129e-01      1946   1
+       200   2.500000e+01  1.191634e+02  1.003221e+00      3920   1
+       300   0.000000e+00  1.191666e+02  1.222479e+00      5902   1
+       330   2.500000e+01  1.191667e+02  1.265766e+00      6224   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 1.221981e+00
+total time (s) : 1.265766e+00
 total solves   : 6224
 best bound     :  1.191667e+02
 simulation ci  :  2.158333e+01 ± 3.290252e+00
@@ -132,16 +132,16 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-       100   0.000000e+00  1.191285e+02  3.271759e-01      2874   1
-       200   2.500000e+00  1.191666e+02  5.546119e-01      4855   1
-       282   7.500000e+00  1.191667e+02  6.863480e-01      5733   1
+       100   0.000000e+00  1.191285e+02  2.961462e-01      2874   1
+       200   2.500000e+00  1.191666e+02  5.767140e-01      4855   1
+       282   7.500000e+00  1.191667e+02  7.111061e-01      5733   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 6.863480e-01
+total time (s) : 7.111061e-01
 total solves   : 5733
 best bound     :  1.191667e+02
 simulation ci  :  2.104610e+01 ± 3.492245e+00
 numeric issues : 0
 -------------------------------------------------------------------
 
-Confidence_interval = 116.06 ± 13.65
+Confidence_interval = 116.06 ± 13.65
diff --git a/previews/PR797/examples/infinite_horizon_trivial/index.html b/previews/PR797/examples/infinite_horizon_trivial/index.html
index 026c97186..cf1f7c12a 100644
--- a/previews/PR797/examples/infinite_horizon_trivial/index.html
+++ b/previews/PR797/examples/infinite_horizon_trivial/index.html
@@ -3,7 +3,7 @@

Infinite horizon trivial

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

using SDDP, HiGHS, Test
+

Infinite horizon trivial

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

using SDDP, HiGHS, Test
 
 function infinite_trivial()
     graph = SDDP.Graph(
@@ -49,15 +49,15 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-        10   4.000000e+00  1.997089e+01  6.829882e-02      1204   1
-        20   8.000000e+00  2.000000e+01  8.908892e-02      1420   1
-        30   1.600000e+01  2.000000e+01  1.558468e-01      2628   1
-        40   8.000000e+00  2.000000e+01  1.774418e-01      2834   1
+        10   4.000000e+00  1.997089e+01  6.984305e-02      1204   1
+        20   8.000000e+00  2.000000e+01  9.086013e-02      1420   1
+        30   1.600000e+01  2.000000e+01  1.610591e-01      2628   1
+        40   8.000000e+00  2.000000e+01  1.829062e-01      2834   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 1.774418e-01
+total time (s) : 1.829062e-01
 total solves   : 2834
 best bound     :  2.000000e+01
 simulation ci  :  1.625000e+01 ± 4.766381e+00
 numeric issues : 0
--------------------------------------------------------------------
+-------------------------------------------------------------------
diff --git a/previews/PR797/examples/no_strong_duality/index.html b/previews/PR797/examples/no_strong_duality/index.html
index 9e1e0a518..a060403ad 100644
--- a/previews/PR797/examples/no_strong_duality/index.html
+++ b/previews/PR797/examples/no_strong_duality/index.html
@@ -3,7 +3,7 @@

No strong duality

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example is interesting, because strong duality doesn't hold for the extensive form (see if you can show why!), but we still converge.

using SDDP, HiGHS, Test
+

No strong duality

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example is interesting, because strong duality doesn't hold for the extensive form (see if you can show why!), but we still converge.

using SDDP, HiGHS, Test
 
 function no_strong_duality()
     model = SDDP.PolicyGraph(
@@ -48,13 +48,13 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   1.000000e+00  1.500000e+00  1.557112e-03         3   1
-        40   4.000000e+00  2.000000e+00  4.292202e-02       578   1
+         1   1.000000e+00  1.500000e+00  1.590967e-03         3   1
+        40   4.000000e+00  2.000000e+00  4.373312e-02       578   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 4.292202e-02
+total time (s) : 4.373312e-02
 total solves   : 578
 best bound     :  2.000000e+00
 simulation ci  :  1.950000e+00 ± 5.568095e-01
 numeric issues : 0
--------------------------------------------------------------------
+-------------------------------------------------------------------
diff --git a/previews/PR797/examples/objective_state_newsvendor/index.html b/previews/PR797/examples/objective_state_newsvendor/index.html
index 668ff2c58..20a1afad4 100644
--- a/previews/PR797/examples/objective_state_newsvendor/index.html
+++ b/previews/PR797/examples/objective_state_newsvendor/index.html
@@ -3,7 +3,7 @@

Newsvendor

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example is based on the classical newsvendor problem, but features an AR(1) spot-price.

   V(x[t-1], ω[t]) =         max p[t] × u[t]
+

Newsvendor

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example is based on the classical newsvendor problem, but features an AR(1) spot-price.

   V(x[t-1], ω[t]) =         max p[t] × u[t]
                       subject to x[t] = x[t-1] - u[t] + ω[t]
                                  u[t] ∈ [0, 1]
                                  x[t] ≥ 0
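In SDDP.jl, a stagewise price process like p[t] is usually handled with an objective state. The following is a minimal, self-contained sketch of that pattern; the stage count, AR(1) coefficient, bounds, and noise support are placeholders, not the data used in this example:

using SDDP, HiGHS

sketch = SDDP.LinearPolicyGraph(;
    stages = 3,
    sense = :Max,
    upper_bound = 50.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, x >= 0, SDDP.State, initial_value = 2.0)
    @variable(sp, 0 <= u <= 1)
    @constraint(sp, x.out == x.in - u + 1.0)  # inflow fixed at 1.0 for brevity
    # Spot price dynamics p[t] = 0.9 p[t-1] + ω[t], modelled as an objective state.
    SDDP.add_objective_state(
        sp;
        initial_value = 1.5,
        lower_bound = 0.5,
        upper_bound = 2.5,
        lipschitz = 10.0,
    ) do p, ω
        return 0.9 * p + ω
    end
    SDDP.parameterize(sp, [-0.1, 0.0, 0.1]) do ω
        p = SDDP.objective_state(sp)
        @stageobjective(sp, p * u)
    end
end

The example that follows uses the same pattern with its own price dynamics and noise support.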
@@ -93,138 +93,137 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-        10   5.250000e+00  4.888859e+00  1.663401e-01      1350   1
-        20   4.350000e+00  4.105855e+00  2.500541e-01      2700   1
-        30   5.000000e+00  4.100490e+00  3.432181e-01      4050   1
-        40   3.500000e+00  4.097376e+00  4.441550e-01      5400   1
-        50   5.250000e+00  4.095859e+00  5.509920e-01      6750   1
-        60   3.643750e+00  4.093342e+00  6.686151e-01      8100   1
-        70   2.643750e+00  4.091818e+00  7.786582e-01      9450   1
-        80   5.087500e+00  4.091591e+00  8.907940e-01     10800   1
-        90   5.062500e+00  4.091309e+00  1.003956e+00     12150   1
-       100   4.843750e+00  4.087004e+00  1.126466e+00     13500   1
-       110   3.437500e+00  4.086094e+00  1.248292e+00     14850   1
-       120   3.375000e+00  4.085926e+00  1.371468e+00     16200   1
-       130   5.025000e+00  4.085866e+00  1.496897e+00     17550   1
-       140   5.000000e+00  4.085734e+00  1.623436e+00     18900   1
-       150   3.500000e+00  4.085655e+00  1.751765e+00     20250   1
-       160   4.281250e+00  4.085454e+00  1.876908e+00     21600   1
-       170   4.562500e+00  4.085425e+00  2.003059e+00     22950   1
-       180   5.768750e+00  4.085425e+00  2.164016e+00     24300   1
-       190   3.468750e+00  4.085359e+00  2.297447e+00     25650   1
-       200   4.131250e+00  4.085225e+00  2.429615e+00     27000   1
-       210   4.512500e+00  4.085157e+00  2.560197e+00     28350   1
-       220   4.900000e+00  4.085153e+00  2.693851e+00     29700   1
-       230   4.025000e+00  4.085134e+00  2.832126e+00     31050   1
-       240   4.468750e+00  4.085116e+00  2.971372e+00     32400   1
-       250   4.062500e+00  4.085075e+00  3.107905e+00     33750   1
-       260   4.875000e+00  4.085037e+00  3.249029e+00     35100   1
-       270   3.850000e+00  4.085011e+00  3.388852e+00     36450   1
-       280   4.912500e+00  4.084992e+00  3.530865e+00     37800   1
-       290   2.987500e+00  4.084986e+00  3.687086e+00     39150   1
-       300   3.825000e+00  4.084957e+00  3.834561e+00     40500   1
-       310   3.250000e+00  4.084911e+00  3.980833e+00     41850   1
-       320   3.600000e+00  4.084896e+00  4.126528e+00     43200   1
-       330   3.925000e+00  4.084896e+00  4.261477e+00     44550   1
-       340   4.500000e+00  4.084893e+00  4.405026e+00     45900   1
-       350   5.000000e+00  4.084891e+00  4.548161e+00     47250   1
-       360   3.075000e+00  4.084866e+00  4.690575e+00     48600   1
-       370   3.500000e+00  4.084861e+00  4.842601e+00     49950   1
-       380   3.356250e+00  4.084857e+00  4.991565e+00     51300   1
-       390   5.500000e+00  4.084846e+00  5.183973e+00     52650   1
-       400   4.475000e+00  4.084846e+00  5.330938e+00     54000   1
-       410   3.750000e+00  4.084843e+00  5.479866e+00     55350   1
-       420   3.687500e+00  4.084843e+00  5.632725e+00     56700   1
-       430   4.337500e+00  4.084825e+00  5.797897e+00     58050   1
-       440   5.750000e+00  4.084825e+00  5.937596e+00     59400   1
-       450   4.925000e+00  4.084792e+00  6.095874e+00     60750   1
-       460   3.600000e+00  4.084792e+00  6.253533e+00     62100   1
-       470   4.387500e+00  4.084792e+00  6.403327e+00     63450   1
-       480   4.000000e+00  4.084792e+00  6.562336e+00     64800   1
-       490   2.975000e+00  4.084788e+00  6.715832e+00     66150   1
-       500   3.125000e+00  4.084788e+00  6.883962e+00     67500   1
-       510   4.250000e+00  4.084788e+00  7.044633e+00     68850   1
-       520   4.512500e+00  4.084786e+00  7.196681e+00     70200   1
-       530   3.875000e+00  4.084786e+00  7.356925e+00     71550   1
-       540   4.387500e+00  4.084781e+00  7.516770e+00     72900   1
-       550   5.281250e+00  4.084780e+00  7.680792e+00     74250   1
-       560   4.650000e+00  4.084780e+00  7.839925e+00     75600   1
-       570   3.062500e+00  4.084780e+00  7.995148e+00     76950   1
-       580   3.187500e+00  4.084780e+00  8.171869e+00     78300   1
-       590   3.812500e+00  4.084780e+00  8.322004e+00     79650   1
-       600   3.637500e+00  4.084774e+00  8.480089e+00     81000   1
-       610   3.950000e+00  4.084765e+00  8.636525e+00     82350   1
-       620   4.625000e+00  4.084760e+00  8.796984e+00     83700   1
-       630   4.218750e+00  4.084760e+00  8.960174e+00     85050   1
-       640   3.025000e+00  4.084755e+00  9.120951e+00     86400   1
-       650   2.993750e+00  4.084751e+00  9.272855e+00     87750   1
-       660   3.262500e+00  4.084746e+00  9.430292e+00     89100   1
-       670   3.625000e+00  4.084746e+00  9.590352e+00     90450   1
-       680   2.981250e+00  4.084746e+00  9.752024e+00     91800   1
-       690   4.187500e+00  4.084746e+00  9.920906e+00     93150   1
-       700   4.500000e+00  4.084746e+00  1.007550e+01     94500   1
-       710   3.225000e+00  4.084746e+00  1.023228e+01     95850   1
-       720   4.375000e+00  4.084746e+00  1.039064e+01     97200   1
-       730   2.650000e+00  4.084746e+00  1.055457e+01     98550   1
-       740   3.250000e+00  4.084746e+00  1.071168e+01     99900   1
-       750   4.725000e+00  4.084746e+00  1.091026e+01    101250   1
-       760   3.375000e+00  4.084746e+00  1.108029e+01    102600   1
-       770   5.375000e+00  4.084746e+00  1.124661e+01    103950   1
-       780   4.068750e+00  4.084746e+00  1.141610e+01    105300   1
-       790   4.412500e+00  4.084746e+00  1.158874e+01    106650   1
-       800   4.350000e+00  4.084746e+00  1.175775e+01    108000   1
-       810   5.887500e+00  4.084746e+00  1.193063e+01    109350   1
-       820   4.912500e+00  4.084746e+00  1.209860e+01    110700   1
-       830   4.387500e+00  4.084746e+00  1.226029e+01    112050   1
-       840   3.675000e+00  4.084746e+00  1.245525e+01    113400   1
-       850   5.375000e+00  4.084746e+00  1.262465e+01    114750   1
-       860   3.562500e+00  4.084746e+00  1.280133e+01    116100   1
-       870   3.075000e+00  4.084746e+00  1.298335e+01    117450   1
-       880   3.625000e+00  4.084746e+00  1.315501e+01    118800   1
-       890   2.937500e+00  4.084746e+00  1.332374e+01    120150   1
-       900   4.450000e+00  4.084746e+00  1.352272e+01    121500   1
-       910   4.200000e+00  4.084746e+00  1.369764e+01    122850   1
-       920   3.687500e+00  4.084746e+00  1.387853e+01    124200   1
-       930   4.725000e+00  4.084746e+00  1.406010e+01    125550   1
-       940   4.018750e+00  4.084746e+00  1.423773e+01    126900   1
-       950   4.675000e+00  4.084746e+00  1.440701e+01    128250   1
-       960   3.375000e+00  4.084746e+00  1.457836e+01    129600   1
-       970   3.812500e+00  4.084746e+00  1.474899e+01    130950   1
-       980   3.112500e+00  4.084746e+00  1.492253e+01    132300   1
-       990   3.600000e+00  4.084746e+00  1.509909e+01    133650   1
-      1000   5.500000e+00  4.084746e+00  1.527622e+01    135000   1
-      1010   3.187500e+00  4.084746e+00  1.544734e+01    136350   1
-      1020   4.900000e+00  4.084746e+00  1.562007e+01    137700   1
-      1030   3.637500e+00  4.084746e+00  1.582686e+01    139050   1
-      1040   3.975000e+00  4.084746e+00  1.600489e+01    140400   1
-      1050   4.750000e+00  4.084746e+00  1.618961e+01    141750   1
-      1060   4.437500e+00  4.084746e+00  1.638499e+01    143100   1
-      1070   5.000000e+00  4.084746e+00  1.656761e+01    144450   1
-      1080   4.143750e+00  4.084746e+00  1.675360e+01    145800   1
-      1090   5.625000e+00  4.084746e+00  1.693228e+01    147150   1
-      1100   3.475000e+00  4.084746e+00  1.711901e+01    148500   1
-      1110   4.156250e+00  4.084746e+00  1.730887e+01    149850   1
-      1120   4.450000e+00  4.084746e+00  1.749134e+01    151200   1
-      1130   3.312500e+00  4.084741e+00  1.767779e+01    152550   1
-      1140   5.375000e+00  4.084741e+00  1.785472e+01    153900   1
-      1150   4.800000e+00  4.084737e+00  1.806527e+01    155250   1
-      1160   3.300000e+00  4.084737e+00  1.825366e+01    156600   1
-      1170   4.356250e+00  4.084737e+00  1.843901e+01    157950   1
-      1180   3.900000e+00  4.084737e+00  1.862842e+01    159300   1
-      1190   4.450000e+00  4.084737e+00  1.882290e+01    160650   1
-      1200   5.156250e+00  4.084737e+00  1.901250e+01    162000   1
-      1210   4.500000e+00  4.084737e+00  1.919030e+01    163350   1
-      1220   4.875000e+00  4.084737e+00  1.938506e+01    164700   1
-      1230   4.000000e+00  4.084737e+00  1.956429e+01    166050   1
-      1240   4.062500e+00  4.084737e+00  1.975550e+01    167400   1
-      1250   5.450000e+00  4.084737e+00  1.995034e+01    168750   1
-      1252   4.650000e+00  4.084737e+00  2.000588e+01    169020   1
+        10   5.250000e+00  4.888859e+00  1.704819e-01      1350   1
+        20   4.350000e+00  4.105855e+00  2.557840e-01      2700   1
+        30   5.000000e+00  4.100490e+00  3.514409e-01      4050   1
+        40   3.500000e+00  4.097376e+00  4.545798e-01      5400   1
+        50   5.250000e+00  4.095859e+00  5.626230e-01      6750   1
+        60   3.643750e+00  4.093342e+00  6.754730e-01      8100   1
+        70   2.643750e+00  4.091818e+00  7.879639e-01      9450   1
+        80   5.087500e+00  4.091591e+00  9.042399e-01     10800   1
+        90   5.062500e+00  4.091309e+00  1.019908e+00     12150   1
+       100   4.843750e+00  4.087004e+00  1.144455e+00     13500   1
+       110   3.437500e+00  4.086094e+00  1.268943e+00     14850   1
+       120   3.375000e+00  4.085926e+00  1.394307e+00     16200   1
+       130   5.025000e+00  4.085866e+00  1.521941e+00     17550   1
+       140   5.000000e+00  4.085734e+00  1.649412e+00     18900   1
+       150   3.500000e+00  4.085655e+00  1.778080e+00     20250   1
+       160   4.281250e+00  4.085454e+00  1.904933e+00     21600   1
+       170   4.562500e+00  4.085425e+00  2.033533e+00     22950   1
+       180   5.768750e+00  4.085425e+00  2.163414e+00     24300   1
+       190   3.468750e+00  4.085359e+00  2.299521e+00     25650   1
+       200   4.131250e+00  4.085225e+00  2.433752e+00     27000   1
+       210   4.512500e+00  4.085157e+00  2.604127e+00     28350   1
+       220   4.900000e+00  4.085153e+00  2.737455e+00     29700   1
+       230   4.025000e+00  4.085134e+00  2.875680e+00     31050   1
+       240   4.468750e+00  4.085116e+00  3.015667e+00     32400   1
+       250   4.062500e+00  4.085075e+00  3.153744e+00     33750   1
+       260   4.875000e+00  4.085037e+00  3.294495e+00     35100   1
+       270   3.850000e+00  4.085011e+00  3.434320e+00     36450   1
+       280   4.912500e+00  4.084992e+00  3.576204e+00     37800   1
+       290   2.987500e+00  4.084986e+00  3.725002e+00     39150   1
+       300   3.825000e+00  4.084957e+00  3.877516e+00     40500   1
+       310   3.250000e+00  4.084911e+00  4.027672e+00     41850   1
+       320   3.600000e+00  4.084896e+00  4.174708e+00     43200   1
+       330   3.925000e+00  4.084896e+00  4.311967e+00     44550   1
+       340   4.500000e+00  4.084893e+00  4.458920e+00     45900   1
+       350   5.000000e+00  4.084891e+00  4.605219e+00     47250   1
+       360   3.075000e+00  4.084866e+00  4.750036e+00     48600   1
+       370   3.500000e+00  4.084861e+00  4.902742e+00     49950   1
+       380   3.356250e+00  4.084857e+00  5.058502e+00     51300   1
+       390   5.500000e+00  4.084846e+00  5.217160e+00     52650   1
+       400   4.475000e+00  4.084846e+00  5.367141e+00     54000   1
+       410   3.750000e+00  4.084843e+00  5.518252e+00     55350   1
+       420   3.687500e+00  4.084843e+00  5.674711e+00     56700   1
+       430   4.337500e+00  4.084825e+00  5.869491e+00     58050   1
+       440   5.750000e+00  4.084825e+00  6.013694e+00     59400   1
+       450   4.925000e+00  4.084792e+00  6.175355e+00     60750   1
+       460   3.600000e+00  4.084792e+00  6.332937e+00     62100   1
+       470   4.387500e+00  4.084792e+00  6.485656e+00     63450   1
+       480   4.000000e+00  4.084792e+00  6.648800e+00     64800   1
+       490   2.975000e+00  4.084788e+00  6.804068e+00     66150   1
+       500   3.125000e+00  4.084788e+00  6.960602e+00     67500   1
+       510   4.250000e+00  4.084788e+00  7.128119e+00     68850   1
+       520   4.512500e+00  4.084786e+00  7.283815e+00     70200   1
+       530   3.875000e+00  4.084786e+00  7.448404e+00     71550   1
+       540   4.387500e+00  4.084781e+00  7.613383e+00     72900   1
+       550   5.281250e+00  4.084780e+00  7.778834e+00     74250   1
+       560   4.650000e+00  4.084780e+00  7.934796e+00     75600   1
+       570   3.062500e+00  4.084780e+00  8.092858e+00     76950   1
+       580   3.187500e+00  4.084780e+00  8.245184e+00     78300   1
+       590   3.812500e+00  4.084780e+00  8.395426e+00     79650   1
+       600   3.637500e+00  4.084774e+00  8.555107e+00     81000   1
+       610   3.950000e+00  4.084765e+00  8.712438e+00     82350   1
+       620   4.625000e+00  4.084760e+00  8.871296e+00     83700   1
+       630   4.218750e+00  4.084760e+00  9.063928e+00     85050   1
+       640   3.025000e+00  4.084755e+00  9.227391e+00     86400   1
+       650   2.993750e+00  4.084751e+00  9.381393e+00     87750   1
+       660   3.262500e+00  4.084746e+00  9.541476e+00     89100   1
+       670   3.625000e+00  4.084746e+00  9.705555e+00     90450   1
+       680   2.981250e+00  4.084746e+00  9.870671e+00     91800   1
+       690   4.187500e+00  4.084746e+00  1.003358e+01     93150   1
+       700   4.500000e+00  4.084746e+00  1.019379e+01     94500   1
+       710   3.225000e+00  4.084746e+00  1.035506e+01     95850   1
+       720   4.375000e+00  4.084746e+00  1.051891e+01     97200   1
+       730   2.650000e+00  4.084746e+00  1.068753e+01     98550   1
+       740   3.250000e+00  4.084746e+00  1.085118e+01     99900   1
+       750   4.725000e+00  4.084746e+00  1.102475e+01    101250   1
+       760   3.375000e+00  4.084746e+00  1.119860e+01    102600   1
+       770   5.375000e+00  4.084746e+00  1.136600e+01    103950   1
+       780   4.068750e+00  4.084746e+00  1.153917e+01    105300   1
+       790   4.412500e+00  4.084746e+00  1.171766e+01    106650   1
+       800   4.350000e+00  4.084746e+00  1.189214e+01    108000   1
+       810   5.887500e+00  4.084746e+00  1.206906e+01    109350   1
+       820   4.912500e+00  4.084746e+00  1.226712e+01    110700   1
+       830   4.387500e+00  4.084746e+00  1.243011e+01    112050   1
+       840   3.675000e+00  4.084746e+00  1.260213e+01    113400   1
+       850   5.375000e+00  4.084746e+00  1.276808e+01    114750   1
+       860   3.562500e+00  4.084746e+00  1.294599e+01    116100   1
+       870   3.075000e+00  4.084746e+00  1.312371e+01    117450   1
+       880   3.625000e+00  4.084746e+00  1.329775e+01    118800   1
+       890   2.937500e+00  4.084746e+00  1.346379e+01    120150   1
+       900   4.450000e+00  4.084746e+00  1.363870e+01    121500   1
+       910   4.200000e+00  4.084746e+00  1.381329e+01    122850   1
+       920   3.687500e+00  4.084746e+00  1.399519e+01    124200   1
+       930   4.725000e+00  4.084746e+00  1.417308e+01    125550   1
+       940   4.018750e+00  4.084746e+00  1.435487e+01    126900   1
+       950   4.675000e+00  4.084746e+00  1.452249e+01    128250   1
+       960   3.375000e+00  4.084746e+00  1.468833e+01    129600   1
+       970   3.812500e+00  4.084746e+00  1.485362e+01    130950   1
+       980   3.112500e+00  4.084746e+00  1.504885e+01    132300   1
+       990   3.600000e+00  4.084746e+00  1.522341e+01    133650   1
+      1000   5.500000e+00  4.084746e+00  1.540312e+01    135000   1
+      1010   3.187500e+00  4.084746e+00  1.557377e+01    136350   1
+      1020   4.900000e+00  4.084746e+00  1.574687e+01    137700   1
+      1030   3.637500e+00  4.084746e+00  1.593309e+01    139050   1
+      1040   3.975000e+00  4.084746e+00  1.611098e+01    140400   1
+      1050   4.750000e+00  4.084746e+00  1.629219e+01    141750   1
+      1060   4.437500e+00  4.084746e+00  1.648865e+01    143100   1
+      1070   5.000000e+00  4.084746e+00  1.667046e+01    144450   1
+      1080   4.143750e+00  4.084746e+00  1.685576e+01    145800   1
+      1090   5.625000e+00  4.084746e+00  1.703189e+01    147150   1
+      1100   3.475000e+00  4.084746e+00  1.721501e+01    148500   1
+      1110   4.156250e+00  4.084746e+00  1.742786e+01    149850   1
+      1120   4.450000e+00  4.084746e+00  1.761171e+01    151200   1
+      1130   3.312500e+00  4.084741e+00  1.779781e+01    152550   1
+      1140   5.375000e+00  4.084741e+00  1.797506e+01    153900   1
+      1150   4.800000e+00  4.084737e+00  1.816666e+01    155250   1
+      1160   3.300000e+00  4.084737e+00  1.834990e+01    156600   1
+      1170   4.356250e+00  4.084737e+00  1.853366e+01    157950   1
+      1180   3.900000e+00  4.084737e+00  1.871973e+01    159300   1
+      1190   4.450000e+00  4.084737e+00  1.890654e+01    160650   1
+      1200   5.156250e+00  4.084737e+00  1.910280e+01    162000   1
+      1210   4.500000e+00  4.084737e+00  1.928728e+01    163350   1
+      1220   4.875000e+00  4.084737e+00  1.949882e+01    164700   1
+      1230   4.000000e+00  4.084737e+00  1.970362e+01    166050   1
+      1240   4.062500e+00  4.084737e+00  1.989043e+01    167400   1
+      1246   3.000000e+00  4.084737e+00  2.000524e+01    168210   1
 -------------------------------------------------------------------
 status         : time_limit
-total time (s) : 2.000588e+01
-total solves   : 169020
+total time (s) : 2.000524e+01
+total solves   : 168210
 best bound     :  4.084737e+00
-simulation ci  :  4.071058e+00 ± 4.034930e-02
+simulation ci  :  4.071445e+00 ± 4.036229e-02
 numeric issues : 0
 -------------------------------------------------------------------
 
@@ -254,28 +253,28 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-        10   5.025000e+00  4.357902e+00  1.914341e-01      1350   1
-        20   4.250000e+00  4.340926e+00  5.634692e-01      2700   1
-        30   4.312500e+00  4.043498e+00  1.075576e+00      4050   1
-        40   4.525000e+00  4.041138e+00  1.722561e+00      5400   1
-        50   3.687500e+00  4.040451e+00  2.466196e+00      6750   1
-        60   2.987500e+00  4.040209e+00  3.322262e+00      8100   1
-        70   3.225000e+00  4.039112e+00  4.313395e+00      9450   1
-        80   4.500000e+00  4.039113e+00  5.376891e+00     10800   1
-        90   5.750000e+00  4.039007e+00  6.567904e+00     12150   1
-       100   3.700000e+00  4.038888e+00  7.886618e+00     13500   1
-       110   3.800000e+00  4.038857e+00  9.274528e+00     14850   1
-       120   2.687500e+00  4.038826e+00  1.073563e+01     16200   1
-       130   4.737500e+00  4.038815e+00  1.243337e+01     17550   1
-       140   4.550000e+00  4.038782e+00  1.423981e+01     18900   1
-       150   3.250000e+00  4.038775e+00  1.602451e+01     20250   1
-       160   3.062500e+00  4.038770e+00  1.800927e+01     21600   1
-       170   3.750000e+00  4.037586e+00  2.003571e+01     22950   1
+        10   4.512500e+00  4.066874e+00  1.980422e-01      1350   1
+        20   5.062500e+00  4.040569e+00  5.403211e-01      2700   1
+        30   4.968750e+00  4.039400e+00  1.060473e+00      4050   1
+        40   4.125000e+00  4.039286e+00  1.720881e+00      5400   1
+        50   3.925000e+00  4.039078e+00  2.568094e+00      6750   1
+        60   3.875000e+00  4.039004e+00  3.473380e+00      8100   1
+        70   3.918750e+00  4.039008e+00  4.585967e+00      9450   1
+        80   3.600000e+00  4.038911e+00  5.747896e+00     10800   1
+        90   4.250000e+00  4.038874e+00  7.041694e+00     12150   1
+       100   5.400000e+00  4.038820e+00  8.425379e+00     13500   1
+       110   3.000000e+00  4.038795e+00  9.923730e+00     14850   1
+       120   3.000000e+00  4.038812e+00  1.150825e+01     16200   1
+       130   2.993750e+00  4.038782e+00  1.320263e+01     17550   1
+       140   4.406250e+00  4.038770e+00  1.508397e+01     18900   1
+       150   5.625000e+00  4.038777e+00  1.698754e+01     20250   1
+       160   3.081250e+00  4.038772e+00  1.895570e+01     21600   1
+       165   5.006250e+00  4.038772e+00  2.003449e+01     22275   1
 -------------------------------------------------------------------
 status         : time_limit
-total time (s) : 2.003571e+01
-total solves   : 22950
-best bound     :  4.037586e+00
-simulation ci  :  4.072096e+00 ± 1.147962e-01
+total time (s) : 2.003449e+01
+total solves   : 22275
+best bound     :  4.038772e+00
+simulation ci  :  4.070947e+00 ± 1.188614e-01
 numeric issues : 0
--------------------------------------------------------------------
+-------------------------------------------------------------------
diff --git a/previews/PR797/examples/sldp_example_one/index.html b/previews/PR797/examples/sldp_example_one/index.html
index 43d12f5a5..c023aac93 100644
--- a/previews/PR797/examples/sldp_example_one/index.html
+++ b/previews/PR797/examples/sldp_example_one/index.html
@@ -3,7 +3,7 @@

SLDP: example 1

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example is derived from Section 4.2 of the paper: Ahmed, S., Cabral, F. G., & da Costa, B. F. P. (2019). Stochastic Lipschitz Dynamic Programming. Optimization Online. PDF

using SDDP, HiGHS, Test
+

SLDP: example 1

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example is derived from Section 4.2 of the paper: Ahmed, S., Cabral, F. G., & da Costa, B. F. P. (2019). Stochastic Lipschitz Dynamic Programming. Optimization Online. PDF

using SDDP, HiGHS, Test
 
 function sldp_example_one()
     model = SDDP.LinearPolicyGraph(;
@@ -65,18 +65,17 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-        10   3.075152e+00  1.161916e+00  4.014568e-01      1680   1
-        20   2.990147e+00  1.167070e+00  4.960868e-01      2560   1
-        30   2.537098e+00  1.167299e+00  8.817201e-01      4240   1
-        40   3.173765e+00  1.167299e+00  9.791501e-01      5120   1
-        50   3.509464e+00  1.167299e+00  1.372273e+00      6800   1
-        60   4.637198e+00  1.167410e+00  1.510682e+00      7680   1
-        63   3.068220e+00  1.167410e+00  1.542017e+00      7944   1
+        10   3.426289e+00  1.163128e+00  3.929579e-01      1680   1
+        20   2.386729e+00  1.163467e+00  4.889431e-01      2560   1
+        30   3.405925e+00  1.165481e+00  8.810191e-01      4240   1
+        40   3.219206e+00  1.165481e+00  9.849341e-01      5120   1
+        50   3.074686e+00  1.165481e+00  1.385555e+00      6800   1
+        60   3.224080e+00  1.165481e+00  1.488954e+00      7680   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 1.542017e+00
-total solves   : 7944
-best bound     :  1.167410e+00
-simulation ci  :  3.215855e+00 ± 1.095737e-01
+total time (s) : 1.488954e+00
+total solves   : 7680
+best bound     :  1.165481e+00
+simulation ci  :  3.299213e+00 ± 1.277496e-01
 numeric issues : 0
--------------------------------------------------------------------
+-------------------------------------------------------------------
diff --git a/previews/PR797/examples/sldp_example_two/index.html b/previews/PR797/examples/sldp_example_two/index.html
index d049f65cd..133ef2a78 100644
--- a/previews/PR797/examples/sldp_example_two/index.html
+++ b/previews/PR797/examples/sldp_example_two/index.html
@@ -3,7 +3,7 @@

SLDP: example 2

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example is derived from Section 4.3 of the paper: Ahmed, S., Cabral, F. G., & da Costa, B. F. P. (2019). Stochastic Lipschitz Dynamic Programming. Optimization Online. PDF

using SDDP
+

SLDP: example 2

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example is derived from Section 4.3 of the paper: Ahmed, S., Cabral, F. G., & da Costa, B. F. P. (2019). Stochastic Lipschitz Dynamic Programming. Optimization Online. PDF

using SDDP
 import HiGHS
 import Test
 
@@ -92,16 +92,16 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-        10  -9.800000e+01 -5.809615e+01  3.419590e-02        78   1
-        20  -4.000000e+01 -5.809615e+01  7.143688e-02       148   1
-        30  -4.000000e+01 -5.809615e+01  1.092849e-01       226   1
-        40  -4.000000e+01 -5.809615e+01  1.427970e-01       296   1
+        10  -4.000000e+01 -5.809615e+01  3.133106e-02        78   1
+        20  -4.000000e+01 -5.809615e+01  6.373596e-02       148   1
+        30  -4.700000e+01 -5.809615e+01  1.023810e-01       226   1
+        40  -4.000000e+01 -5.809615e+01  1.361670e-01       296   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 1.427970e-01
+total time (s) : 1.361670e-01
 total solves   : 296
 best bound     : -5.809615e+01
-simulation ci  : -5.508750e+01 ± 7.745664e+00
+simulation ci  : -5.188750e+01 ± 7.419070e+00
 numeric issues : 0
 -------------------------------------------------------------------
 
@@ -133,16 +133,16 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-        10  -4.000000e+01 -6.196125e+01  3.858209e-02       138   1
-        20  -6.300000e+01 -6.196125e+01  7.412910e-02       258   1
-        30  -4.000000e+01 -6.196125e+01  1.231642e-01       396   1
-        40  -9.800000e+01 -6.196125e+01  1.596701e-01       516   1
+        10  -4.700000e+01 -6.196125e+01  4.044700e-02       138   1
+        20  -9.800000e+01 -6.196125e+01  7.669592e-02       258   1
+        30  -7.500000e+01 -6.196125e+01  1.264119e-01       396   1
+        40  -6.300000e+01 -6.196125e+01  1.642599e-01       516   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 1.596701e-01
+total time (s) : 1.642599e-01
 total solves   : 516
 best bound     : -6.196125e+01
-simulation ci  : -5.211250e+01 ± 5.462441e+00
+simulation ci  : -5.548750e+01 ± 5.312051e+00
 numeric issues : 0
 -------------------------------------------------------------------
 
@@ -174,15 +174,15 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-        10  -4.700000e+01 -6.546793e+01  7.615209e-02       462   1
-        20  -5.600000e+01 -6.546793e+01  1.345971e-01       852   1
-        30  -8.200000e+01 -6.546793e+01  2.454309e-01      1314   1
-        40  -8.200000e+01 -6.546793e+01  3.039951e-01      1704   1
+        10  -8.200000e+01 -6.546793e+01  7.644391e-02       462   1
+        20  -7.000000e+01 -6.546793e+01  1.428950e-01       852   1
+        30  -6.300000e+01 -6.546793e+01  2.591200e-01      1314   1
+        40  -4.700000e+01 -6.546793e+01  3.199151e-01      1704   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 3.039951e-01
+total time (s) : 3.199151e-01
 total solves   : 1704
 best bound     : -6.546793e+01
-simulation ci  : -6.211250e+01 ± 5.560515e+00
+simulation ci  : -6.263750e+01 ± 5.346304e+00
 numeric issues : 0
--------------------------------------------------------------------
+-------------------------------------------------------------------
diff --git a/previews/PR797/examples/stochastic_all_blacks/index.html b/previews/PR797/examples/stochastic_all_blacks/index.html
index 1065dda49..b848ee8b6 100644
--- a/previews/PR797/examples/stochastic_all_blacks/index.html
+++ b/previews/PR797/examples/stochastic_all_blacks/index.html
@@ -3,7 +3,7 @@

Stochastic All Blacks

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

using SDDP, HiGHS, Test
+

Stochastic All Blacks

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

using SDDP, HiGHS, Test
 
 function stochastic_all_blacks()
     # Number of time periods
@@ -77,13 +77,13 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1L  6.000000e+00  1.200000e+01  4.097199e-02        11   1
-        40L  6.000000e+00  8.000000e+00  4.075310e-01       602   1
+         1L  3.000000e+00  1.422222e+01  4.147816e-02        11   1
+        40L  6.000000e+00  8.000000e+00  5.456250e-01       602   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 4.075310e-01
+total time (s) : 5.456250e-01
 total solves   : 602
 best bound     :  8.000000e+00
-simulation ci  :  7.650000e+00 ± 8.140491e-01
+simulation ci  :  7.125000e+00 ± 7.499254e-01
 numeric issues : 0
--------------------------------------------------------------------
+-------------------------------------------------------------------
diff --git a/previews/PR797/examples/the_farmers_problem/index.html b/previews/PR797/examples/the_farmers_problem/index.html
index b74763547..85aea5cf8 100644
--- a/previews/PR797/examples/the_farmers_problem/index.html
+++ b/previews/PR797/examples/the_farmers_problem/index.html
@@ -3,7 +3,7 @@

The farmer's problem

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This problem is taken from Section 1.1 of the book Birge, J. R., & Louveaux, F. (2011). Introduction to Stochastic Programming. New York, NY: Springer New York. Paragraphs in quotes are taken verbatim.

Problem description

Consider a European farmer who specializes in raising wheat, corn, and sugar beets on his 500 acres of land. During the winter, [they want] to decide how much land to devote to each crop.

The farmer knows that at least 200 tons (T) of wheat and 240 T of corn are needed for cattle feed. These amounts can be raised on the farm or bought from a wholesaler. Any production in excess of the feeding requirement would be sold.

Over the last decade, mean selling prices have been $170 and $150 per ton of wheat and corn, respectively. The purchase prices are 40% more than this due to the wholesaler’s margin and transportation costs.

"Another profitable crop is sugar beet, which [they expect] to sell at $36/T; however, the European Commission imposes a quota on sugar beet production. Any amount in excess of the quota can be sold only at $10/T. The farmer’s quota for next year is 6000 T."

Based on past experience, the farmer knows that the mean yield on [their] land is roughly 2.5 T, 3 T, and 20 T per acre for wheat, corn, and sugar beets, respectively.

[To introduce uncertainty,] assume some correlation among the yields of the different crops. A very simplified representation of this would be to assume that years are good, fair, or bad for all crops, resulting in above average, average, or below average yields for all crops. To fix these ideas, above and below average indicate a yield 20% above or below the mean yield.

Problem data

The area of the farm.

MAX_AREA = 500.0
500.0

There are three crops:

CROPS = [:wheat, :corn, :sugar_beet]
3-element Vector{Symbol}:
+

The farmer's problem

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This problem is taken from Section 1.1 of the book Birge, J. R., & Louveaux, F. (2011). Introduction to Stochastic Programming. New York, NY: Springer New York. Paragraphs in quotes are taken verbatim.

Problem description

Consider a European farmer who specializes in raising wheat, corn, and sugar beets on his 500 acres of land. During the winter, [they want] to decide how much land to devote to each crop.

The farmer knows that at least 200 tons (T) of wheat and 240 T of corn are needed for cattle feed. These amounts can be raised on the farm or bought from a wholesaler. Any production in excess of the feeding requirement would be sold.

Over the last decade, mean selling prices have been $170 and $150 per ton of wheat and corn, respectively. The purchase prices are 40% more than this due to the wholesaler’s margin and transportation costs.

"Another profitable crop is sugar beet, which [they expect] to sell at $36/T; however, the European Commission imposes a quota on sugar beet production. Any amount in excess of the quota can be sold only at $10/T. The farmer’s quota for next year is 6000 T."

Based on past experience, the farmer knows that the mean yield on [their] land is roughly 2.5 T, 3 T, and 20 T per acre for wheat, corn, and sugar beets, respectively.

[To introduce uncertainty,] assume some correlation among the yields of the different crops. A very simplified representation of this would be to assume that years are good, fair, or bad for all crops, resulting in above average, average, or below average yields for all crops. To fix these ideas, above and below average indicate a yield 20% above or below the mean yield.
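Numerically (a sketch of the data only; these names are not from the original script), the three weather outcomes simply scale the mean yields:

MEAN_YIELD = Dict(:wheat => 2.5, :corn => 3.0, :sugar_beet => 20.0)  # T/acre
YIELD_MULTIPLIER = Dict(:good => 1.2, :fair => 1.0, :bad => 0.8)
# For example, wheat in a bad year: 0.8 * 2.5 = 2.0 T/acre.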

Problem data

The area of the farm.

MAX_AREA = 500.0
500.0

There are three crops:

CROPS = [:wheat, :corn, :sugar_beet]
3-element Vector{Symbol}:
  :wheat
  :corn
  :sugar_beet

Each of the crops has a different planting cost ($/acre).

PLANTING_COST = Dict(:wheat => 150.0, :corn => 230.0, :sugar_beet => 260.0)
Dict{Symbol, Float64} with 3 entries:
@@ -125,13 +125,13 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1  -9.800000e+04  4.922260e+05  8.704495e-02         6   1
-        40   1.093500e+05  1.083900e+05  1.160882e-01       240   1
+         1  -9.800000e+04  4.922260e+05  8.721399e-02         6   1
+        40   4.882000e+04  1.083900e+05  1.163750e-01       240   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 1.160882e-01
+total time (s) : 1.163750e-01
 total solves   : 240
 best bound     :  1.083900e+05
-simulation ci  :  9.763370e+04 ± 1.992771e+04
+simulation ci  :  1.002754e+05 ± 2.174010e+04
 numeric issues : 0
--------------------------------------------------------------------

Checking the policy

Birge and Louveaux report that the optimal objective value is $108,390. Check that we got the correct solution using SDDP.calculate_bound:

@assert isapprox(SDDP.calculate_bound(model), 108_390.0, atol = 0.1)
+-------------------------------------------------------------------

Checking the policy

Birge and Louveaux report that the optimal objective value is $108,390. Check that we got the correct solution using SDDP.calculate_bound:

@assert isapprox(SDDP.calculate_bound(model), 108_390.0, atol = 0.1)
diff --git a/previews/PR797/examples/vehicle_location/index.html b/previews/PR797/examples/vehicle_location/index.html
index 10d9bff5d..012eb2383 100644
--- a/previews/PR797/examples/vehicle_location/index.html
+++ b/previews/PR797/examples/vehicle_location/index.html
@@ -3,7 +3,7 @@

Vehicle location

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This problem is a version of the Ambulance dispatch problem. A hospital is located at 0 on the number line that stretches from 0 to 100. Ambulance bases are located at points 20, 40, 60, 80, and 100. When not responding to a call, ambulances must be located at a base or at the hospital. In this example, there are three ambulances.

Example location:

H       B       B       B       B       B
+

Vehicle location

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This problem is a version of the Ambulance dispatch problem. A hospital is located at 0 on the number line that stretches from 0 to 100. Ambulance bases are located at points 20, 40, 60, 80, and 100. When not responding to a call, ambulances must be located at a base or at the hospital. In this example, there are three ambulances.

Example location:

H       B       B       B       B       B
 0 ---- 20 ---- 40 ---- 60 ---- 80 ---- 100

In each stage, a call comes in from somewhere on the number line. The agent must decide which ambulance to dispatch, paying a cost of twice the driving distance. If an ambulance is not dispatched in a stage, it can be relocated to a different base in preparation for future calls; this incurs a cost equal to the driving distance.
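As a rough sketch of this cost structure (the exact distance accounting used in the model below may differ):

# Dispatching an ambulance costs twice the driving distance to the call.
dispatch_cost(ambulance, call) = 2 * abs(ambulance - call)
# Relocating an idle ambulance costs the driving distance between bases.
relocation_cost(from_base, to_base) = abs(from_base - to_base)
dispatch_cost(20, 55)    # = 70
relocation_cost(40, 80)  # = 40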

using SDDP
 import HiGHS
 import Test
@@ -108,4 +108,4 @@
 end
 
 # TODO(odow): find out why this fails
-# vehicle_location_model(SDDP.ContinuousConicDuality())
vehicle_location_model (generic function with 1 method)
+# vehicle_location_model(SDDP.ContinuousConicDuality())
vehicle_location_model (generic function with 1 method)
diff --git a/previews/PR797/explanation/risk/index.html b/previews/PR797/explanation/risk/index.html index a3687ad62..b71bd8f9d 100644 --- a/previews/PR797/explanation/risk/index.html +++ b/previews/PR797/explanation/risk/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Risk aversion

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

In Introductory theory, we implemented a basic version of the SDDP algorithm. This tutorial extends that implementation to add risk-aversion.

Packages

This tutorial uses the following packages. For clarity, we call import PackageName so that we must prefix PackageName. to all functions and structs provided by that package. Everything not prefixed is either part of base Julia, or we wrote it.

import ForwardDiff
+

Risk aversion

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

In Introductory theory, we implemented a basic version of the SDDP algorithm. This tutorial extends that implementation to add risk-aversion.

Packages

This tutorial uses the following packages. For clarity, we call import PackageName so that we must prefix PackageName. to all functions and structs provided by that package. Everything not prefixed is either part of base Julia, or we wrote it.

import ForwardDiff
 import HiGHS
 import Ipopt
 import JuMP
@@ -512,18 +512,18 @@
 | | Visiting node 2
 | | | Z = [1.0, 2.0, 3.0, 4.0]
 | | | p = [0.3333333333333333, 0.3333333333333333, 0.3333333333333333]
-| | | q = [0.3333333333333333, 0.3333333333333333, 0.3333333333333333]
-| | | α = 5.551115123125783e-17
-| | | Adding cut : cost_to_go ≥ -5.551115123125783e-17
+| | | q = [0.3903334174068186, 0.3048332912965907, 0.3048332912965907]
+| | | α = 0.007126620949088093
+| | | Adding cut : 58.55001261102279 volume_out + cost_to_go ≥ 8782.49476503247
 | | Visiting node 1
 | | | Z = [1.0, 2.0, 3.0, 4.0]
 | | | p = [0.3333333333333333, 0.3333333333333333, 0.3333333333333333]
 | | | q = [1.0, 0.0, 0.0]
 | | | α = 1.0986122886681098
-| | | Adding cut : 100 volume_out + cost_to_go ≥ 29998.594667538695
+| | | Adding cut : 100 volume_out + cost_to_go ≥ 29998.59466753869
 | Finished iteration
 | | lower_bound = 14998.594667538693
-Upper bound = 10199.486236986007 ± 849.1503855322602

Finally, evaluate the decision rule:

evaluate_policy(
+Upper bound = 10399.47052774895 ± 860.6342743551556

Finally, evaluate the decision rule:

evaluate_policy(
     model;
     node = 1,
     incoming_state = Dict(:volume => 150.0),
@@ -536,4 +536,4 @@
   :volume_in          => 150.0
   :thermal_generation => 125.0
   :hydro_generation   => 25.0
-  :cost_to_go         => 9998.59
Info

For this trivial example, the risk-averse policy isn't very different from the policy obtained using the expectation risk-measure. If you try it on some bigger/more interesting problems, you should see the expected cost increase, and the upper tail of the policy decrease.

+ :cost_to_go => 9998.59
Info

For this trivial example, the risk-averse policy isn't very different from the policy obtained using the expectation risk-measure. If you try it on some bigger/more interesting problems, you should see the expected cost increase, and the upper tail of the policy decrease.

diff --git a/previews/PR797/explanation/theory_intro/index.html b/previews/PR797/explanation/theory_intro/index.html index 1197b45a2..15199d36f 100644 --- a/previews/PR797/explanation/theory_intro/index.html +++ b/previews/PR797/explanation/theory_intro/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Introductory theory

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

Note

This tutorial is aimed at advanced undergraduates or early-stage graduate students. You don't need prior exposure to stochastic programming! (Indeed, it may be better if you don't, because our approach is non-standard in the literature.)

This tutorial is also a living document. If parts are unclear, please open an issue so it can be improved!

This tutorial will teach you how the stochastic dual dynamic programming algorithm works by implementing a simplified version of the algorithm.

Our implementation is very much a "vanilla" version of SDDP; it doesn't have (m)any fancy computational tricks (e.g., the ones included in SDDP.jl) that you need to code a performant or stable version that will work on realistic instances. However, our simplified implementation will work on arbitrary policy graphs, including those with cycles such as infinite horizon problems!

Packages

This tutorial uses the following packages. For clarity, we call import PackageName so that we must prefix PackageName. to all functions and structs provided by that package. Everything not prefixed is either part of base Julia, or we wrote it.

import ForwardDiff
+

Introductory theory

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

Note

This tutorial is aimed at advanced undergraduates or early-stage graduate students. You don't need prior exposure to stochastic programming! (Indeed, it may be better if you don't, because our approach is non-standard in the literature.)

This tutorial is also a living document. If parts are unclear, please open an issue so it can be improved!

This tutorial will teach you how the stochastic dual dynamic programming algorithm works by implementing a simplified version of the algorithm.

Our implementation is very much a "vanilla" version of SDDP; it doesn't have (m)any fancy computational tricks (e.g., the ones included in SDDP.jl) that you need to code a performant or stable version that will work on realistic instances. However, our simplified implementation will work on arbitrary policy graphs, including those with cycles such as infinite horizon problems!

Packages

This tutorial uses the following packages. For clarity, we call import PackageName so that we must prefix PackageName. to all functions and structs provided by that package. Everything not prefixed is either part of base Julia, or we wrote it.

import ForwardDiff
 import HiGHS
 import JuMP
 import Statistics
Tip

You can follow along by installing the above packages, and copy-pasting the code we will write into a Julia REPL. Alternatively, you can download the Julia .jl file which created this tutorial from GitHub.

Preliminaries: background theory

Start this tutorial by reading An introduction to SDDP.jl, which introduces the necessary notation and vocabulary that we need for this tutorial.

Preliminaries: Kelley's cutting plane algorithm

Kelley's cutting plane algorithm is an iterative method for minimizing convex functions. Given a convex function $f(x)$, Kelley's algorithm constructs an under-approximation of the function at the minimum using a set of first-order Taylor series approximations (called cuts), each constructed at one of a set of points $k = 1,\ldots,K$:

\[\begin{aligned} @@ -202,7 +202,7 @@ println("ω = ", sample_uncertainty(model.nodes[1].uncertainty)) end

ω = 100.0
 ω = 100.0
-ω = 0.0

It's also going to be useful to define a function that generates a random walk through the nodes of the graph:

function sample_next_node(model::PolicyGraph, current::Int)
+ω = 50.0

It's also going to be useful to define a function that generates a random walk through the nodes of the graph:

function sample_next_node(model::PolicyGraph, current::Int)
     if length(model.arcs[current]) == 0
         # No outgoing arcs!
         return nothing
@@ -275,15 +275,15 @@
     return trajectory, simulation_cost
 end
forward_pass (generic function with 2 methods)

Let's take a look at one forward pass:

trajectory, simulation_cost = forward_pass(model);
| Forward Pass
 | | Visiting node 1
-| | | ω = 100.0
+| | | ω = 0.0
 | | | x = Dict(:volume => 200.0)
-| | | x′ = Dict(:volume => 0.0)
+| | | x′ = Dict(:volume => 50.0)
 | | | C(x, u, ω) = 0.0
 | | Visiting node 2
 | | | ω = 100.0
-| | | x = Dict(:volume => 0.0)
+| | | x = Dict(:volume => 50.0)
 | | | x′ = Dict(:volume => 0.0)
-| | | C(x, u, ω) = 5000.0
+| | | C(x, u, ω) = 0.0
 | | Visiting node 3
 | | | ω = 100.0
 | | | x = Dict(:volume => 0.0)
@@ -382,15 +382,15 @@
 end
train (generic function with 1 method)

Using our model we defined earlier, we can go:

train(model; iteration_limit = 3, replications = 100)
Starting iteration 1
 | Forward Pass
 | | Visiting node 1
-| | | ω = 100.0
+| | | ω = 50.0
 | | | x = Dict(:volume => 200.0)
-| | | x′ = Dict(:volume => 0.0)
+| | | x′ = Dict(:volume => 100.0)
 | | | C(x, u, ω) = 0.0
 | | Visiting node 2
-| | | ω = 100.0
-| | | x = Dict(:volume => 0.0)
+| | | ω = 50.0
+| | | x = Dict(:volume => 100.0)
 | | | x′ = Dict(:volume => 0.0)
-| | | C(x, u, ω) = 5000.0
+| | | C(x, u, ω) = 0.0
 | | Visiting node 3
 | | | ω = 50.0
 | | | x = Dict(:volume => 0.0)
@@ -412,29 +412,29 @@
 | | | Adding cut : 150 volume_out + cost_to_go ≥ 15000
 | | Visiting node 1
 | | | Solving φ = 0.0
-| | | | V = 30000.0
-| | | | dVdx′ = Dict(:volume => -150.0)
+| | | | V = 15000.0
+| | | | dVdx′ = Dict(:volume => -100.0)
 | | | Solving φ = 50.0
-| | | | V = 22500.0
-| | | | dVdx′ = Dict(:volume => -150.0)
+| | | | V = 10000.0
+| | | | dVdx′ = Dict(:volume => -100.0)
 | | | Solving φ = 100.0
-| | | | V = 15000.0
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 150 volume_out + cost_to_go ≥ 22500
+| | | | V = 5000.0
+| | | | dVdx′ = Dict(:volume => -100.0)
+| | | Adding cut : 99.99999999999999 volume_out + cost_to_go ≥ 20000
 | Finished iteration
-| | lower_bound = 2500.0
+| | lower_bound = 5000.000000000002
 Starting iteration 2
 | Forward Pass
 | | Visiting node 1
-| | | ω = 0.0
+| | | ω = 100.0
 | | | x = Dict(:volume => 200.0)
-| | | x′ = Dict(:volume => 150.0)
-| | | C(x, u, ω) = 5000.0
+| | | x′ = Dict(:volume => 200.00000000000003)
+| | | C(x, u, ω) = 2500.0000000000014
 | | Visiting node 2
 | | | ω = 0.0
-| | | x = Dict(:volume => 150.0)
+| | | x = Dict(:volume => 200.00000000000003)
 | | | x′ = Dict(:volume => 100.0)
-| | | C(x, u, ω) = 10000.0
+| | | C(x, u, ω) = 4999.999999999997
 | | Visiting node 3
 | | | ω = 50.0
 | | | x = Dict(:volume => 100.0)
@@ -456,33 +456,33 @@
 | | | Adding cut : 100 volume_out + cost_to_go ≥ 12500
 | | Visiting node 1
 | | | Solving φ = 0.0
-| | | | V = 12499.999999999998
+| | | | V = 7499.999999999995
 | | | | dVdx′ = Dict(:volume => -100.0)
 | | | Solving φ = 50.0
-| | | | V = 7499.999999999998
+| | | | V = 2499.999999999996
 | | | | dVdx′ = Dict(:volume => -100.0)
 | | | Solving φ = 100.0
-| | | | V = 2499.9999999999986
-| | | | dVdx′ = Dict(:volume => -100.0)
-| | | Adding cut : 99.99999999999999 volume_out + cost_to_go ≥ 22499.999999999996
+| | | | V = 0.0
+| | | | dVdx′ = Dict(:volume => 0.0)
+| | | Adding cut : 66.66666666666666 volume_out + cost_to_go ≥ 16666.666666666664
 | Finished iteration
-| | lower_bound = 7499.999999999998
+| | lower_bound = 8333.333333333332
 Starting iteration 3
 | Forward Pass
 | | Visiting node 1
-| | | ω = 100.0
+| | | ω = 50.0
 | | | x = Dict(:volume => 200.0)
 | | | x′ = Dict(:volume => 200.0)
-| | | C(x, u, ω) = 2499.9999999999986
+| | | C(x, u, ω) = 4999.999999999998
 | | Visiting node 2
-| | | ω = 50.0
+| | | ω = 0.0
 | | | x = Dict(:volume => 200.0)
 | | | x′ = Dict(:volume => 124.99999999999997)
-| | | C(x, u, ω) = 2499.9999999999986
+| | | C(x, u, ω) = 7500.0
 | | Visiting node 3
-| | | ω = 100.0
+| | | ω = 50.0
 | | | x = Dict(:volume => 124.99999999999997)
-| | | x′ = Dict(:volume => 74.99999999999997)
+| | | x′ = Dict(:volume => 24.99999999999997)
 | | | C(x, u, ω) = 0.0
 | Backward pass
 | | Visiting node 3
@@ -512,7 +512,7 @@
 | Finished iteration
 | | lower_bound = 8333.333333333332
 Termination status: iteration limit
-Upper bound = 8750.0 ± 871.2657690992115

Success! We trained a policy for a finite horizon multistage stochastic program using stochastic dual dynamic programming.

Implementation: evaluating the policy

A final step is the ability to evaluate the policy at a given point.

function evaluate_policy(
+Upper bound = 8375.0 ± 839.7274389450255

Success! We trained a policy for a finite horizon multistage stochastic program using stochastic dual dynamic programming.

Implementation: evaluating the policy

A final step is the ability to evaluate the policy at a given point.

function evaluate_policy(
     model::PolicyGraph;
     node::Int,
     incoming_state::Dict{Symbol,Float64},
@@ -561,15 +561,15 @@
 | | | x′ = Dict(:volume => 0.0)
 | | | C(x, u, ω) = 0.0
 | | Visiting node 2
-| | | ω = 50.0
+| | | ω = 0.0
 | | | x = Dict(:volume => 0.0)
 | | | x′ = Dict(:volume => 0.0)
-| | | C(x, u, ω) = 10000.0
+| | | C(x, u, ω) = 15000.0
 | | Visiting node 3
-| | | ω = 0.0
+| | | ω = 100.0
 | | | x = Dict(:volume => 0.0)
 | | | x′ = Dict(:volume => 0.0)
-| | | C(x, u, ω) = 22500.0
+| | | C(x, u, ω) = 7500.0
 | Backward pass
 | | Visiting node 3
 | | | Solving φ = 0.0
@@ -614,50 +614,236 @@
 | | | x′ = Dict(:volume => 183.33333333333334)
 | | | C(x, u, ω) = 4166.666666666667
 | | Visiting node 2
-| | | ω = 100.0
+| | | ω = 50.0
 | | | x = Dict(:volume => 183.33333333333334)
 | | | x′ = Dict(:volume => 133.33333333333334)
-| | | C(x, u, ω) = -2.8421709430404007e-12
+| | | C(x, u, ω) = 5000.0
 | | Visiting node 3
-| | | ω = 0.0
+| | | ω = 50.0
 | | | x = Dict(:volume => 133.33333333333334)
-| | | x′ = Dict(:volume => 0.0)
-| | | C(x, u, ω) = 2499.999999999999
+| | | x′ = Dict(:volume => 33.33333333333334)
+| | | C(x, u, ω) = 0.0
+| Backward pass
+| | Visiting node 3
+| | | Solving φ = 0.0
+| | | | V = 30000.0
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 50.0
+| | | | V = 22500.0
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 100.0
+| | | | V = 15000.0
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Adding cut : 75 volume_out + cost_to_go ≥ 13750
+| | Visiting node 2
+| | | Solving φ = 0.0
+| | | | V = 16249.999999999998
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 50.0
+| | | | V = 11250.0
+| | | | dVdx′ = Dict(:volume => -75.0)
+| | | Solving φ = 100.0
+| | | | V = 7499.999999999999
+| | | | dVdx′ = Dict(:volume => -75.0)
+| | | Adding cut : 100 volume_out + cost_to_go ≥ 24999.999999999996
+| | Visiting node 1
+| | | Solving φ = 0.0
+| | | | V = 21666.666666666657
+| | | | dVdx′ = Dict(:volume => -100.0)
+| | | Solving φ = 50.0
+| | | | V = 16666.666666666664
+| | | | dVdx′ = Dict(:volume => -100.0)
+| | | Solving φ = 100.0
+| | | | V = 11666.666666666666
+| | | | dVdx′ = Dict(:volume => -100.0)
+| | | Adding cut : 99.99999999999999 volume_out + cost_to_go ≥ 35000
+| Finished iteration
+| | lower_bound = 20000.0
+Starting iteration 3
+| Forward Pass
+| | Visiting node 1
+| | | ω = 0.0
+| | | x = Dict(:volume => 200.0)
+| | | x′ = Dict(:volume => 200.0)
+| | | C(x, u, ω) = 7499.999999999998
 | | Visiting node 2
 | | | ω = 50.0
-| | | x = Dict(:volume => 0.0)
+| | | x = Dict(:volume => 200.0)
+| | | x′ = Dict(:volume => 200.0)
+| | | C(x, u, ω) = 10000.0
+| | Visiting node 3
+| | | ω = 100.0
+| | | x = Dict(:volume => 200.0)
+| | | x′ = Dict(:volume => 150.0)
+| | | C(x, u, ω) = 0.0
+| | Visiting node 2
+| | | ω = 100.0
+| | | x = Dict(:volume => 150.0)
+| | | x′ = Dict(:volume => 200.0)
+| | | C(x, u, ω) = 10000.0
+| | Visiting node 3
+| | | ω = 0.0
+| | | x = Dict(:volume => 200.0)
 | | | x′ = Dict(:volume => 50.0)
+| | | C(x, u, ω) = 0.0
+| | Visiting node 2
+| | | ω = 100.0
+| | | x = Dict(:volume => 50.0)
+| | | x′ = Dict(:volume => 150.0)
 | | | C(x, u, ω) = 15000.0
 | | Visiting node 3
+| | | ω = 100.0
+| | | x = Dict(:volume => 150.0)
+| | | x′ = Dict(:volume => 100.0)
+| | | C(x, u, ω) = 0.0
+| | Visiting node 2
+| | | ω = 100.0
+| | | x = Dict(:volume => 100.0)
+| | | x′ = Dict(:volume => 200.0)
+| | | C(x, u, ω) = 15000.0
+| | Visiting node 3
+| | | ω = 0.0
+| | | x = Dict(:volume => 200.0)
+| | | x′ = Dict(:volume => 50.0)
+| | | C(x, u, ω) = 0.0
+| | Visiting node 2
+| | | ω = 0.0
+| | | x = Dict(:volume => 50.0)
+| | | x′ = Dict(:volume => 50.0)
+| | | C(x, u, ω) = 15000.000000000004
+| | Visiting node 3
 | | | ω = 50.0
 | | | x = Dict(:volume => 50.0)
 | | | x′ = Dict(:volume => 0.0)
-| | | C(x, u, ω) = 7500.000000000001
+| | | C(x, u, ω) = 7500.0
+| | Visiting node 2
+| | | ω = 100.0
+| | | x = Dict(:volume => 0.0)
+| | | x′ = Dict(:volume => 100.0)
+| | | C(x, u, ω) = 15000.0
+| | Visiting node 3
+| | | ω = 50.0
+| | | x = Dict(:volume => 100.0)
+| | | x′ = Dict(:volume => 0.0)
+| | | C(x, u, ω) = 0.0
+| | Visiting node 2
+| | | ω = 100.0
+| | | x = Dict(:volume => 0.0)
+| | | x′ = Dict(:volume => 100.0)
+| | | C(x, u, ω) = 15000.0
+| | Visiting node 3
+| | | ω = 50.0
+| | | x = Dict(:volume => 100.0)
+| | | x′ = Dict(:volume => 0.0)
+| | | C(x, u, ω) = 0.0
+| | Visiting node 2
+| | | ω = 0.0
+| | | x = Dict(:volume => 0.0)
+| | | x′ = Dict(:volume => -0.0)
+| | | C(x, u, ω) = 15000.000000000004
+| | Visiting node 3
+| | | ω = 100.0
+| | | x = Dict(:volume => -0.0)
+| | | x′ = Dict(:volume => 0.0)
+| | | C(x, u, ω) = 7500.0
+| | Visiting node 2
+| | | ω = 100.0
+| | | x = Dict(:volume => 0.0)
+| | | x′ = Dict(:volume => 100.0)
+| | | C(x, u, ω) = 15000.0
+| | Visiting node 3
+| | | ω = 50.0
+| | | x = Dict(:volume => 100.0)
+| | | x′ = Dict(:volume => 0.0)
+| | | C(x, u, ω) = 0.0
+| | Visiting node 2
+| | | ω = 0.0
+| | | x = Dict(:volume => 0.0)
+| | | x′ = Dict(:volume => -0.0)
+| | | C(x, u, ω) = 15000.000000000004
+| | Visiting node 3
+| | | ω = 50.0
+| | | x = Dict(:volume => -0.0)
+| | | x′ = Dict(:volume => 0.0)
+| | | C(x, u, ω) = 15000.0
+| | Visiting node 2
+| | | ω = 0.0
+| | | x = Dict(:volume => 0.0)
+| | | x′ = Dict(:volume => -0.0)
+| | | C(x, u, ω) = 15000.000000000004
+| | Visiting node 3
+| | | ω = 50.0
+| | | x = Dict(:volume => -0.0)
+| | | x′ = Dict(:volume => 0.0)
+| | | C(x, u, ω) = 15000.0
+| | Visiting node 2
+| | | ω = 100.0
+| | | x = Dict(:volume => 0.0)
+| | | x′ = Dict(:volume => 100.0)
+| | | C(x, u, ω) = 15000.0
+| | Visiting node 3
+| | | ω = 0.0
+| | | x = Dict(:volume => 100.0)
+| | | x′ = Dict(:volume => 0.0)
+| | | C(x, u, ω) = 7500.0
 | Backward pass
 | | Visiting node 3
 | | | Solving φ = 0.0
+| | | | V = 40000.0
+| | | | dVdx′ = Dict(:volume => -100.0)
+| | | Solving φ = 50.0
 | | | | V = 35000.0
+| | | | dVdx′ = Dict(:volume => -100.0)
+| | | Solving φ = 100.0
+| | | | V = 29999.999999999996
+| | | | dVdx′ = Dict(:volume => -100.0)
+| | | Adding cut : 49.99999999999999 volume_out + cost_to_go ≥ 17500
+| | Visiting node 2
+| | | Solving φ = 0.0
+| | | | V = 25000.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 27500.0
+| | | | V = 17500.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 100.0
-| | | | V = 20000.0
-| | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 75 volume_out + cost_to_go ≥ 13750
+| | | | V = 15000.0
+| | | | dVdx′ = Dict(:volume => -49.99999999999999)
+| | | Adding cut : 116.66666666666666 volume_out + cost_to_go ≥ 30833.33333333333
+| | Visiting node 3
+| | | Solving φ = 0.0
+| | | | V = 45833.33333333333
+| | | | dVdx′ = Dict(:volume => -116.66666666666666)
+| | | Solving φ = 50.0
+| | | | V = 40000.0
+| | | | dVdx′ = Dict(:volume => -116.66666666666666)
+| | | Solving φ = 100.0
+| | | | V = 34166.666666666664
+| | | | dVdx′ = Dict(:volume => -116.66666666666666)
+| | | Adding cut : 58.33333333333333 volume_out + cost_to_go ≥ 20000
 | | Visiting node 2
 | | | Solving φ = 0.0
-| | | | V = 28750.0
+| | | | V = 42500.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 21250.0
+| | | | V = 35000.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 100.0
-| | | | V = 13750.0
+| | | | V = 27500.0
 | | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 150 volume_out + cost_to_go ≥ 28749.999999999996
+| | | Adding cut : 150 volume_out + cost_to_go ≥ 35000
 | | Visiting node 3
 | | | Solving φ = 0.0
+| | | | V = 50000.0
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 50.0
+| | | | V = 42500.0
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 100.0
+| | | | V = 35000.0
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Adding cut : 75 volume_out + cost_to_go ≥ 21249.999999999996
+| | Visiting node 2
+| | | Solving φ = 0.0
 | | | | V = 43750.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
@@ -666,86 +852,220 @@
 | | | Solving φ = 100.0
 | | | | V = 28749.999999999996
 | | | | dVdx′ = Dict(:volume => -150.0)
-| | | Adding cut : 75 volume_out + cost_to_go ≥ 18125
+| | | Adding cut : 150 volume_out + cost_to_go ≥ 36250
+| | Visiting node 3
+| | | Solving φ = 0.0
+| | | | V = 51250.0
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 50.0
+| | | | V = 43750.0
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 100.0
+| | | | V = 36250.0
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Adding cut : 75 volume_out + cost_to_go ≥ 21875
 | | Visiting node 2
 | | | Solving φ = 0.0
-| | | | V = 20625.0
+| | | | V = 29375.0
 | | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 15625.0
-| | | | dVdx′ = Dict(:volume => -75.0)
+| | | | V = 21875.0
+| | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 100.0
-| | | | V = 11875.0
+| | | | V = 18125.0
 | | | | dVdx′ = Dict(:volume => -75.0)
-| | | Adding cut : 100 volume_out + cost_to_go ≥ 29375
-| | Visiting node 1
+| | | Adding cut : 125 volume_out + cost_to_go ≥ 35625
+| | Visiting node 3
 | | | Solving φ = 0.0
-| | | | V = 26041.666666666664
-| | | | dVdx′ = Dict(:volume => -100.0)
+| | | | V = 51250.0
+| | | | dVdx′ = Dict(:volume => -150.0)
 | | | Solving φ = 50.0
-| | | | V = 21041.666666666664
-| | | | dVdx′ = Dict(:volume => -100.0)
+| | | | V = 44375.0
+| | | | dVdx′ = Dict(:volume => -125.0)
 | | | Solving φ = 100.0
-| | | | V = 16041.666666666666
-| | | | dVdx′ = Dict(:volume => -100.0)
-| | | Adding cut : 99.99999999999999 volume_out + cost_to_go ≥ 39375
-| Finished iteration
-| | lower_bound = 24375.000000000004
-Starting iteration 3
-| Forward Pass
-| | Visiting node 1
-| | | ω = 100.0
-| | | x = Dict(:volume => 200.0)
-| | | x′ = Dict(:volume => 200.0)
-| | | C(x, u, ω) = 2500.0
+| | | | V = 38125.0
+| | | | dVdx′ = Dict(:volume => -125.0)
+| | | Adding cut : 66.66666666666666 volume_out + cost_to_go ≥ 22291.666666666664
 | | Visiting node 2
-| | | ω = 0.0
-| | | x = Dict(:volume => 200.0)
-| | | x′ = Dict(:volume => 200.0)
-| | | C(x, u, ω) = 15000.0
+| | | Solving φ = 0.0
+| | | | V = 44791.666666666664
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 50.0
+| | | | V = 37291.666666666664
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 100.0
+| | | | V = 29791.666666666664
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Adding cut : 150 volume_out + cost_to_go ≥ 37291.666666666664
 | | Visiting node 3
-| | | ω = 50.0
-| | | x = Dict(:volume => 200.0)
-| | | x′ = Dict(:volume => 100.0)
-| | | C(x, u, ω) = 0.0
-| Backward pass
+| | | Solving φ = 0.0
+| | | | V = 52291.666666666664
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 50.0
+| | | | V = 44791.666666666664
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 100.0
+| | | | V = 38125.0
+| | | | dVdx′ = Dict(:volume => -125.0)
+| | | Adding cut : 70.83333333333333 volume_out + cost_to_go ≥ 22534.72222222222
+| | Visiting node 2
+| | | Solving φ = 0.0
+| | | | V = 30034.72222222222
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 50.0
+| | | | V = 22534.72222222222
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 100.0
+| | | | V = 18993.055555555555
+| | | | dVdx′ = Dict(:volume => -70.83333333333333)
+| | | Adding cut : 123.61111111111111 volume_out + cost_to_go ≥ 36215.277777777774
+| | Visiting node 3
+| | | Solving φ = 0.0
+| | | | V = 52291.666666666664
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 50.0
+| | | | V = 45034.72222222222
+| | | | dVdx′ = Dict(:volume => -123.61111111111111)
+| | | Solving φ = 100.0
+| | | | V = 38854.166666666664
+| | | | dVdx′ = Dict(:volume => -123.61111111111111)
+| | | Adding cut : 66.2037037037037 volume_out + cost_to_go ≥ 22696.759259259255
+| | Visiting node 2
+| | | Solving φ = 0.0
+| | | | V = 30196.759259259255
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 50.0
+| | | | V = 22696.759259259255
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 100.0
+| | | | V = 19386.57407407407
+| | | | dVdx′ = Dict(:volume => -66.2037037037037)
+| | | Adding cut : 122.0679012345679 volume_out + cost_to_go ≥ 36300.15432098765
 | | Visiting node 3
 | | | Solving φ = 0.0
-| | | | V = 34375.0
+| | | | V = 52291.666666666664
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 50.0
+| | | | V = 45196.759259259255
+| | | | dVdx′ = Dict(:volume => -122.0679012345679)
+| | | Solving φ = 100.0
+| | | | V = 39093.36419753086
+| | | | dVdx′ = Dict(:volume => -122.0679012345679)
+| | | Adding cut : 65.68930041152262 volume_out + cost_to_go ≥ 22763.631687242792
+| | Visiting node 2
+| | | Solving φ = 0.0
+| | | | V = 37763.63168724279
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 50.0
+| | | | V = 30263.631687242792
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 100.0
+| | | | V = 22763.631687242792
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Adding cut : 150 volume_out + cost_to_go ≥ 37763.63168724279
+| | Visiting node 3
+| | | Solving φ = 0.0
+| | | | V = 45263.631687242785
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 50.0
+| | | | V = 39093.36419753086
+| | | | dVdx′ = Dict(:volume => -122.0679012345679)
+| | | Solving φ = 100.0
+| | | | V = 32989.96913580246
+| | | | dVdx′ = Dict(:volume => -122.0679012345679)
+| | | Adding cut : 65.68930041152262 volume_out + cost_to_go ≥ 22842.292524005483
+| | Visiting node 2
+| | | Solving φ = 0.0
+| | | | V = 19557.82750342935
+| | | | dVdx′ = Dict(:volume => -65.68930041152262)
+| | | Solving φ = 50.0
+| | | | V = 16273.36248285322
+| | | | dVdx′ = Dict(:volume => -65.68930041152262)
+| | | Solving φ = 100.0
+| | | | V = 12988.89746227709
+| | | | dVdx′ = Dict(:volume => -65.68930041152262)
+| | | Adding cut : 65.68930041152262 volume_out + cost_to_go ≥ 29411.222565157746
+| | Visiting node 3
+| | | Solving φ = 0.0
+| | | | V = 39093.36419753086
+| | | | dVdx′ = Dict(:volume => -122.0679012345679)
+| | | Solving φ = 50.0
+| | | | V = 33603.665522400945
+| | | | dVdx′ = Dict(:volume => -100.00000000000001)
+| | | Solving φ = 100.0
+| | | | V = 28603.665522400945
+| | | | dVdx′ = Dict(:volume => -100.00000000000001)
+| | | Adding cut : 53.677983539094654 volume_out + cost_to_go ≥ 22251.247560964923
+| | Visiting node 2
+| | | Solving φ = 0.0
+| | | | V = 22842.292524005483
+| | | | dVdx′ = Dict(:volume => -65.68930041152262)
+| | | Solving φ = 50.0
+| | | | V = 19567.34838401019
+| | | | dVdx′ = Dict(:volume => -53.677983539094654)
+| | | Solving φ = 100.0
+| | | | V = 16883.449207055455
+| | | | dVdx′ = Dict(:volume => -53.677983539094654)
+| | | Adding cut : 57.68175582990397 volume_out + cost_to_go ≥ 28416.62674617597
+| | Visiting node 3
+| | | Solving φ = 0.0
+| | | | V = 45263.631687242785
+| | | | dVdx′ = Dict(:volume => -150.0)
+| | | Solving φ = 50.0
+| | | | V = 39093.36419753085
+| | | | dVdx′ = Dict(:volume => -122.0679012345679)
+| | | Solving φ = 100.0
+| | | | V = 33603.665522400945
+| | | | dVdx′ = Dict(:volume => -100.0)
+| | | Adding cut : 62.011316872427976 volume_out + cost_to_go ≥ 22760.676078150493
+| | Visiting node 2
+| | | Solving φ = 0.0
+| | | | V = 19660.110234529096
+| | | | dVdx′ = Dict(:volume => -62.011316872427976)
+| | | Solving φ = 50.0
+| | | | V = 16883.449207055455
+| | | | dVdx′ = Dict(:volume => -53.677983539094654)
+| | | Solving φ = 100.0
+| | | | V = 14199.550030100723
+| | | | dVdx′ = Dict(:volume => -53.677983539094654)
+| | | Adding cut : 56.45576131687242 volume_out + cost_to_go ≥ 28205.522087269575
+| | Visiting node 3
+| | | Solving φ = 0.0
+| | | | V = 33603.665522400945
 | | | | dVdx′ = Dict(:volume => -100.0)
 | | | Solving φ = 50.0
-| | | | V = 29375.0
+| | | | V = 28603.665522400945
 | | | | dVdx′ = Dict(:volume => -100.0)
 | | | Solving φ = 100.0
-| | | | V = 24375.0
+| | | | V = 23603.665522400945
 | | | | dVdx′ = Dict(:volume => -100.0)
-| | | Adding cut : 49.99999999999999 volume_out + cost_to_go ≥ 19687.5
+| | | Adding cut : 49.99999999999999 volume_out + cost_to_go ≥ 21801.832761200472
 | | Visiting node 2
 | | | Solving φ = 0.0
-| | | | V = 17187.5
-| | | | dVdx′ = Dict(:volume => -49.99999999999999)
+| | | | V = 19660.110234529096
+| | | | dVdx′ = Dict(:volume => -62.011316872427976)
 | | | Solving φ = 50.0
-| | | | V = 14687.5
-| | | | dVdx′ = Dict(:volume => -49.99999999999999)
+| | | | V = 16883.449207055455
+| | | | dVdx′ = Dict(:volume => -53.677983539094654)
 | | | Solving φ = 100.0
-| | | | V = 12187.5
+| | | | V = 14301.832761200472
 | | | | dVdx′ = Dict(:volume => -49.99999999999999)
-| | | Adding cut : 49.99999999999999 volume_out + cost_to_go ≥ 24687.499999999996
+| | | Adding cut : 55.22976680384087 volume_out + cost_to_go ≥ 27994.41742836318
 | | Visiting node 1
 | | | Solving φ = 0.0
-| | | | V = 24375.0
+| | | | V = 28603.665522400945
 | | | | dVdx′ = Dict(:volume => -100.0)
 | | | Solving φ = 50.0
-| | | | V = 19687.499999999996
-| | | | dVdx′ = Dict(:volume => -49.999999999999986)
+| | | | V = 23603.665522400945
+| | | | dVdx′ = Dict(:volume => -100.0)
 | | | Solving φ = 100.0
-| | | | V = 17187.499999999996
-| | | | dVdx′ = Dict(:volume => -49.999999999999986)
-| | | Adding cut : 66.66666666666664 volume_out + cost_to_go ≥ 33749.99999999999
+| | | | V = 19764.363371690368
+| | | | dVdx′ = Dict(:volume => -57.68175582990398)
+| | | Adding cut : 85.89391860996798 volume_out + cost_to_go ≥ 41169.34852749102
 | Finished iteration
-| | lower_bound = 25416.66666666666
+| | lower_bound = 28990.56480549742
 Termination status: iteration limit
-Upper bound = 29521.875 ± 6454.1357826769545

Success! We trained a policy for an infinite horizon multistage stochastic program using stochastic dual dynamic programming. Note how some of the forward passes are different lengths!

evaluate_policy(
+Upper bound = 33033.000684306564 ± 7619.076388304794

Success! We trained a policy for an infinite horizon multistage stochastic program using stochastic dual dynamic programming. Note how some of the forward passes are different lengths!

evaluate_policy(
     model;
     node = 3,
     incoming_state = Dict(:volume => 100.0),
@@ -758,4 +1078,4 @@
   :volume_in          => 100.0
   :thermal_generation => 40.0
   :hydro_generation   => 110.0
-  :cost_to_go         => 19687.5
+ :cost_to_go => 22842.3 diff --git a/previews/PR797/guides/access_previous_variables/index.html b/previews/PR797/guides/access_previous_variables/index.html index 41a5ad063..fcb427f7b 100644 --- a/previews/PR797/guides/access_previous_variables/index.html +++ b/previews/PR797/guides/access_previous_variables/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Access variables from a previous stage

A common question is "how do I use a variable from a previous stage in a constraint?"

Info

If you want to use a variable from a previous stage, it must be a state variable.

Here are some examples:

Access a first-stage decision in a future stage

This is often useful if your first-stage decisions are capacity-expansion type decisions (e.g., you choose first how much capacity to add, but because it takes time to build, it only shows up in some future stage).

julia> using SDDP, HiGHS
julia> SDDP.LinearPolicyGraph( +

Access variables from a previous stage

A common question is "how do I use a variable from a previous stage in a constraint?"

Info

If you want to use a variable from a previous stage, it must be a state variable.

Here are some examples:

Access a first-stage decision in a future stage

This is often useful if your first-stage decisions are capacity-expansion type decisions (e.g., you choose first how much capacity to add, but because it takes time to build, it only shows up in some future stage).

julia> using SDDP, HiGHS
julia> SDDP.LinearPolicyGraph( stages = 10, sense = :Max, upper_bound = 100.0, @@ -36,7 +36,7 @@ end end endA policy graph with 10 nodes. - Node indices: 1, ..., 10

Access a decision from N stages ago

This is often useful if have some inventory problem with a lead-time on orders.

julia> using SDDP, HiGHS
julia> SDDP.LinearPolicyGraph( + Node indices: 1, ..., 10

Access a decision from N stages ago

This is often useful if you have an inventory problem with a lead time on orders. In the code below, we assume that the product has a lead time of 5 stages, and we use state variables to track the production decisions from the last 5 stages. The decisions are passed to the next stage by shifting them along by one stage; a minimal sketch of this shifting pattern follows the example and warning below.

julia> using SDDP, HiGHS
julia> SDDP.LinearPolicyGraph( stages = 10, sense = :Max, upper_bound = 100, @@ -63,4 +63,39 @@ # Maximize quantity of sold items. @stageobjective(sp, sell) endA policy graph with 10 nodes. - Node indices: 1, ..., 10
Warning

You must initialize the same number of state variables in every stage, even if they are not used in that stage.

+ Node indices: 1, ..., 10
Warning

You must initialize the same number of state variables in every stage, even if they are not used in that stage.
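To make the shifting idiom from the example above concrete, here is a minimal sketch of the pipeline states for a 5-stage lead time. The names are illustrative only, this is not the full model, and the snippet belongs inside the do sp, t ... end block of a policy graph like the one above:

@variables(sp, begin
    x_inventory >= 0, SDDP.State, (initial_value = 0)
    x_pipeline[1:5], SDDP.State, (initial_value = 0)
    u_buy >= 0
    u_sell >= 0
end)
@constraints(sp, begin
    # New orders enter the back of the pipeline.
    x_pipeline[5].out == u_buy
    # Older orders shift forward by one slot each stage.
    [i = 1:4], x_pipeline[i].out == x_pipeline[i+1].in
    # Orders placed 5 stages ago arrive in inventory.
    x_inventory.out == x_inventory.in + x_pipeline[1].in - u_sell
end)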

Stochastic lead times

Stochastic lead times can be modeled by adding stochasticity to the pipeline balance constraint.

The trick is to use the random variable $\omega$ to represent the lead time, together with JuMP.set_normalized_coefficient to add u_buy to the i-th pipeline balance constraint when $\omega$ is equal to i. For example, if $\omega = 2$ and T = 4, we would have the constraints:

c_pipeline[1], x_pipeline[1].out == x_pipeline[2].in + 0 * u_buy
+c_pipeline[2], x_pipeline[2].out == x_pipeline[3].in + 1 * u_buy
+c_pipeline[3], x_pipeline[3].out == x_pipeline[4].in + 0 * u_buy
+c_pipeline[4], x_pipeline[4].out == x_pipeline[5].in + 0 * u_buy
julia> using SDDP
julia> import HiGHS
julia> T = 10
10
julia> model = SDDP.LinearPolicyGraph( + stages = 20, + sense = :Max, + upper_bound = 1000, + optimizer = HiGHS.Optimizer, + ) do sp, t + @variables(sp, begin + x_inventory >= 0, SDDP.State, (initial_value = 0) + x_pipeline[1:T+1], SDDP.State, (initial_value = 0) + 0 <= u_buy <= 10 + u_sell >= 0 + end) + fix(x_pipeline[T+1].out, 0) + @stageobjective(sp, u_sell) + @constraints(sp, begin + # Shift the orders one stage + c_pipeline[i=1:T], x_pipeline[i].out == x_pipeline[i+1].in + 1 * u_buy + # x_pipeline[1].in are arriving on the inventory + x_inventory.out == x_inventory.in - u_sell + x_pipeline[1].in + end) + SDDP.parameterize(sp, 1:T) do ω + # Rewrite the constraint c_pipeline[i=1:T] indicating how many stages + # ahead the order will arrive (ω) + # if ω == i: + # x_pipeline[i+1].in + 1 * u_buy == x_pipeline[i].out + # else: + # x_pipeline[i+1].in + 0 * u_buy == x_pipeline[i].out + for i in 1:T + set_normalized_coefficient(c_pipeline[i], u_buy, ω == i ? 1 : 0) + end + end + endA policy graph with 20 nodes. + Node indices: 1, ..., 20
diff --git a/previews/PR797/guides/add_a_multidimensional_state_variable/index.html b/previews/PR797/guides/add_a_multidimensional_state_variable/index.html index bc8df65ae..b6fc2ffb1 100644 --- a/previews/PR797/guides/add_a_multidimensional_state_variable/index.html +++ b/previews/PR797/guides/add_a_multidimensional_state_variable/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Add a multi-dimensional state variable

Just like normal JuMP variables, it is possible to create containers of state variables.

julia> model = SDDP.LinearPolicyGraph(
+

Add a multi-dimensional state variable

Just like normal JuMP variables, it is possible to create containers of state variables.

julia> model = SDDP.LinearPolicyGraph(
            stages=1, lower_bound = 0, optimizer = HiGHS.Optimizer
        ) do subproblem, t
            # A scalar state variable.
@@ -19,4 +19,4 @@
        end;
 Lower bound of outgoing x is: 0.0
 Lower bound of outgoing y[1] is: 1.0
-Lower bound of outgoing z[3, :B] is: 3.0
+Lower bound of outgoing z[3, :B] is: 3.0
diff --git a/previews/PR797/guides/add_a_risk_measure/index.html b/previews/PR797/guides/add_a_risk_measure/index.html index 7cdfb0b3e..d2e18e676 100644 --- a/previews/PR797/guides/add_a_risk_measure/index.html +++ b/previews/PR797/guides/add_a_risk_measure/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Add a risk measure

Training a risk-averse model

SDDP.jl supports a variety of risk measures. Two common ones are SDDP.Expectation and SDDP.WorstCase. Let's see how to train a policy using them. There are three possible ways.

If the same risk measure is used at every node in the policy graph, we can just pass an instance of one of the risk measures to the risk_measure keyword argument of the SDDP.train function.

SDDP.train(
+

Add a risk measure

Training a risk-averse model

SDDP.jl supports a variety of risk measures. Two common ones are SDDP.Expectation and SDDP.WorstCase. Let's see how to train a policy using them. There are three possible ways.

If the same risk measure is used at every node in the policy graph, we can just pass an instance of one of the risk measures to the risk_measure keyword argument of the SDDP.train function.

SDDP.train(
     model,
     risk_measure = SDDP.WorstCase(),
     iteration_limit = 10
@@ -40,7 +40,7 @@
  0.0
  0.0
  0.0
- 0.0

Expectation

SDDP.ExpectationType
Expectation()

The Expectation risk measure. Identical to taking the expectation with respect to the nominal distribution.

source
julia> using SDDP
julia> SDDP.adjust_probability( + 0.0

Expectation

SDDP.ExpectationType
Expectation()

The Expectation risk measure. Identical to taking the expectation with respect to the nominal distribution.

source
julia> using SDDP
julia> SDDP.adjust_probability( SDDP.Expectation(), risk_adjusted_probability, nominal_probability, @@ -51,7 +51,7 @@ 0.1 0.2 0.3 - 0.4

SDDP.Expectation is the default risk measure in SDDP.jl.

Worst-case

SDDP.WorstCaseType
WorstCase()

The worst-case risk measure. Places all of the probability weight on the worst outcome.

source
julia> SDDP.adjust_probability(
+ 0.4

SDDP.Expectation is the default risk measure in SDDP.jl.

Worst-case

SDDP.WorstCaseType
WorstCase()

The worst-case risk measure. Places all of the probability weight on the worst outcome.

source
julia> SDDP.adjust_probability(
            SDDP.WorstCase(),
            risk_adjusted_probability,
            nominal_probability,
@@ -62,7 +62,7 @@
  0.0
  0.0
  1.0
- 0.0

Average value at risk (AV@R)

SDDP.AVaRType
AVaR(β)

The average value at risk (AV@R) risk measure.

Computes the expectation of the β fraction of worst outcomes. β must be in [0, 1]. When β=1, this is equivalent to the Expectation risk measure. When β=0, this is equivalent to the WorstCase risk measure.

AV@R is also known as the conditional value at risk (CV@R) or expected shortfall.

source
julia> SDDP.adjust_probability(
+ 0.0

Average value at risk (AV@R)

SDDP.AVaRType
AVaR(β)

The average value at risk (AV@R) risk measure.

Computes the expectation of the β fraction of worst outcomes. β must be in [0, 1]. When β=1, this is equivalent to the Expectation risk measure. When β=0, this is equivalent to the WorstCase risk measure.

AV@R is also known as the conditional value at risk (CV@R) or expected shortfall.

source
julia> SDDP.adjust_probability(
            SDDP.AVaR(0.5),
            risk_adjusted_probability,
            nominal_probability,
@@ -84,10 +84,10 @@
  0.05
  0.1
  0.65
- 0.2

As a special case, the SDDP.EAVaR risk-measure is a convex combination of SDDP.Expectation and SDDP.AVaR:

julia> SDDP.EAVaR(beta=0.25, lambda=0.4)A convex combination of 0.4 * SDDP.Expectation() + 0.6 * SDDP.AVaR(0.25)
SDDP.EAVaRFunction
EAVaR(;lambda=1.0, beta=1.0)

A risk measure that is a convex combination of Expectation and Average Value @ Risk (also called Conditional Value @ Risk).

    λ * E[x] + (1 - λ) * AV@R(β)[x]

Keyword Arguments

  • lambda: Convex weight on the expectation ((1-lambda) weight is put on the AV@R component). Increasing values of lambda are less risk averse (more weight on expectation).

  • beta: The quantile at which to calculate the Average Value @ Risk. Increasing values of beta are less risk averse. If beta=0, then the AV@R component is the worst case risk measure.

source

Distributionally robust

SDDP.jl supports two types of distributionally robust risk measures: the modified Χ² method of Philpott et al. (2018), and a method based on the Wasserstein distance metric.

Modified Chi-squared

SDDP.ModifiedChiSquaredType
ModifiedChiSquared(radius::Float64; minimum_std=1e-5)

The distributionally robust SDDP risk measure of Philpott, A., de Matos, V., Kapelevich, L. Distributionally robust SDDP. Computational Management Science (2018) 165:431-454.

Explanation

In a Distributionally Robust Optimization (DRO) approach, we modify the probabilities we associate with all future scenarios so that the resulting probability distribution is the "worst case" probability distribution, in some sense.

In each backward pass we will compute a worst case probability distribution vector p. We compute p so that:

p ∈ argmax p'z
+ 0.2

As a special case, the SDDP.EAVaR risk-measure is a convex combination of SDDP.Expectation and SDDP.AVaR:

julia> SDDP.EAVaR(beta=0.25, lambda=0.4)A convex combination of 0.4 * SDDP.Expectation() + 0.6 * SDDP.AVaR(0.25)
SDDP.EAVaRFunction
EAVaR(;lambda=1.0, beta=1.0)

A risk measure that is a convex combination of Expectation and Average Value @ Risk (also called Conditional Value @ Risk).

    λ * E[x] + (1 - λ) * AV@R(β)[x]

Keyword Arguments

  • lambda: Convex weight on the expectation ((1-lambda) weight is put on the AV@R component). Increasing values of lambda are less risk averse (more weight on expectation).

  • beta: The quantile at which to calculate the Average Value @ Risk. Increasing values of beta are less risk averse. If beta=0, then the AV@R component is the worst case risk measure.

source
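For example, to train a policy with this combined risk measure (a sketch, assuming model has already been built):

SDDP.train(
    model;
    risk_measure = SDDP.EAVaR(lambda = 0.5, beta = 0.25),
    iteration_limit = 10,
)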

Distributionally robust

SDDP.jl supports two types of distributionally robust risk measures: the modified Χ² method of Philpott et al. (2018), and a method based on the Wasserstein distance metric.

Modified Chi-squared

SDDP.ModifiedChiSquaredType
ModifiedChiSquared(radius::Float64; minimum_std=1e-5)

The distributionally robust SDDP risk measure of Philpott, A., de Matos, V., Kapelevich, L. Distributionally robust SDDP. Computational Management Science (2018) 165:431-454.

Explanation

In a Distributionally Robust Optimization (DRO) approach, we modify the probabilities we associate with all future scenarios so that the resulting probability distribution is the "worst case" probability distribution, in some sense.

In each backward pass we will compute a worst case probability distribution vector p. We compute p so that:

p ∈ argmax p'z
       s.t. [r; p - a] in SecondOrderCone()
            sum(p) == 1
-           p >= 0

where

  1. z is a vector of future costs. We assume that our aim is to minimize future cost p'z. If we maximize reward, we would have p ∈ argmin{p'z}.
  2. a is the uniform distribution
  3. r is a user specified radius - the larger the radius, the more conservative the policy.

Notes

The largest radius that will work with S scenarios is sqrt((S-1)/S).

If the uncorrected standard deviation of the objective realizations is less than minimum_std, then the risk-measure will default to Expectation().

This code was contributed by Lea Kapelevich.

source
julia> SDDP.adjust_probability(
+           p >= 0

where

  1. z is a vector of future costs. We assume that our aim is to minimize future cost p'z. If we maximize reward, we would have p ∈ argmin{p'z}.
  2. a is the uniform distribution
  3. r is a user specified radius - the larger the radius, the more conservative the policy.

Notes

The largest radius that will work with S scenarios is sqrt((S-1)/S).

If the uncorrected standard deviation of the objective realizations is less than minimum_std, then the risk-measure will default to Expectation().

This code was contributed by Lea Kapelevich.

source
julia> SDDP.adjust_probability(
            SDDP.ModifiedChiSquared(0.5),
            risk_adjusted_probability,
            [0.25, 0.25, 0.25, 0.25],
@@ -98,7 +98,7 @@
  0.3333333333333333
  0.044658198738520394
  0.6220084679281462
- 0.0

Wasserstein

SDDP.WassersteinType
Wasserstein(norm::Function, solver_factory; alpha::Float64)

A distributionally-robust risk measure based on the Wasserstein distance.

As alpha increases, the measure becomes more risk-averse. When alpha=0, the measure is equivalent to the expectation operator. As alpha increases, the measure approaches the Worst-case risk measure.

source
julia> import HiGHS
julia> SDDP.adjust_probability( + 0.0

Wasserstein

SDDP.WassersteinType
Wasserstein(norm::Function, solver_factory; alpha::Float64)

A distributionally-robust risk measure based on the Wasserstein distance.

As alpha increases, the measure becomes more risk-averse. When alpha=0, the measure is equivalent to the expectation operator. As alpha increases, the measure approaches the Worst-case risk measure.

source
julia> import HiGHS
julia> SDDP.adjust_probability( SDDP.Wasserstein(HiGHS.Optimizer; alpha=0.5) do x, y return abs(x - y) end, @@ -113,7 +113,7 @@ 0.7999999999999999 -0.0

Entropic

SDDP.EntropicType
Entropic(γ::Float64)

The entropic risk measure as described by:

Dowson, O., Morton, D.P. & Pagnoncelli, B.K. Incorporating convex risk
 measures into multistage stochastic programming algorithms. Annals of
-Operations Research (2022). [doi](https://doi.org/10.1007/s10479-022-04977-w).

As γ increases, the measure becomes more risk-averse.

source
julia> SDDP.adjust_probability(
+Operations Research (2022). [doi](https://doi.org/10.1007/s10479-022-04977-w).

As γ increases, the measure becomes more risk-averse.

source
julia> SDDP.adjust_probability(
            SDDP.Entropic(0.1),
            risk_adjusted_probability,
            nominal_probability,
@@ -124,4 +124,4 @@
  0.1100296362588547
  0.19911786395979578
  0.3648046623591841
- 0.3260478374221655
+ 0.3260478374221655 diff --git a/previews/PR797/guides/add_integrality/index.html b/previews/PR797/guides/add_integrality/index.html index 36e3748ff..e5d6d054d 100644 --- a/previews/PR797/guides/add_integrality/index.html +++ b/previews/PR797/guides/add_integrality/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Integrality

There's nothing special about binary and integer variables in SDDP.jl. Your models may contain a mix of binary, integer, or continuous state and control variables. Use the standard JuMP syntax to add binary or integer variables.

For example:

using SDDP, HiGHS
+

Integrality

There's nothing special about binary and integer variables in SDDP.jl. Your models may contain a mix of binary, integer, or continuous state and control variables. Use the standard JuMP syntax to add binary or integer variables.

For example:

using SDDP, HiGHS
 model = SDDP.LinearPolicyGraph(
    stages = 3,
    lower_bound = 0.0,
@@ -25,4 +25,4 @@
 \max\limits_{\lambda}\min\limits_{\bar{x}, x^\prime, u} \;\; & C_i(\bar{x}, u, \omega) + \mathbb{E}_{j \in i^+, \varphi \in \Omega_j}[V_j(x^\prime, \varphi)] - \lambda^\top(\bar{x} - x)\\
 & x^\prime = T_i(\bar{x}, u, \omega) \\
 & u \in U_i(\bar{x}, \omega)
-\end{aligned}\]

You can use Lagrangian duality in SDDP.jl by passing SDDP.LagrangianDuality to the duality_handler argument of SDDP.train.

Compared with linear programming duality, the Lagrangian problem is difficult to solve because it requires the solution of many mixed-integer programs instead of a single linear program. This is one reason why "SDDiP" has poor performance.

Convergence

The second part to SDDiP is a very tightly scoped claim: if all of the state variables are binary and the algorithm uses Lagrangian duality to compute a subgradient, then it will converge to an optimal policy.

In many cases, papers claim to "do SDDiP," but they have state variables which are not binary. In these cases, the algorithm is not guaranteed to converge to a globally optimal policy.

One work-around that has been suggested is to discretize the state variables into a set of binary state variables. However, this leads to a large number of binary state variables, which is another reason why "SDDiP" has poor performance.

In general, we recommend that you introduce integer variables into your model without fear of the consequences, and that you treat the resulting policy as a good heuristic, rather than an attempt to find a globally optimal policy.

+\end{aligned}\]

You can use Lagrangian duality in SDDP.jl by passing SDDP.LagrangianDuality to the duality_handler argument of SDDP.train.
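For example (a sketch, assuming model has already been built):

SDDP.train(model; duality_handler = SDDP.LagrangianDuality(), iteration_limit = 10)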

Compared with linear programming duality, the Lagrangian problem is difficult to solve because it requires the solution of many mixed-integer programs instead of a single linear program. This is one reason why "SDDiP" has poor performance.

Convergence

The second part to SDDiP is a very tightly scoped claim: if all of the state variables are binary and the algorithm uses Lagrangian duality to compute a subgradient, then it will converge to an optimal policy.

In many cases, papers claim to "do SDDiP," but they have state variables which are not binary. In these cases, the algorithm is not guaranteed to converge to a globally optimal policy.

One work-around that has been suggested is to discretize the state variables into a set of binary state variables. However, this leads to a large number of binary state variables, which is another reason why "SDDiP" has poor performance.

In general, we recommend that you introduce integer variables into your model without fear of the consequences, and that you treat the resulting policy as a good heuristic, rather than an attempt to find a globally optimal policy.

diff --git a/previews/PR797/guides/add_multidimensional_noise/index.html b/previews/PR797/guides/add_multidimensional_noise/index.html index 316b9c0ab..6bb33e0d5 100644 --- a/previews/PR797/guides/add_multidimensional_noise/index.html +++ b/previews/PR797/guides/add_multidimensional_noise/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Add multi-dimensional noise terms

Multi-dimensional stagewise-independent random variables can be created by forming the Cartesian product of the random variables.

A simple example

If the sample space and probabilities are given as vectors for each marginal distribution, do:

julia> model = SDDP.LinearPolicyGraph(
+

Add multi-dimensional noise terms

Multi-dimensional stagewise-independent random variables can be created by forming the Cartesian product of the random variables.

A simple example

If the sample space and probabilities are given as vectors for each marginal distribution, do:

julia> model = SDDP.LinearPolicyGraph(
            stages = 3,
            lower_bound = 0,
            optimizer = HiGHS.Optimizer,
@@ -81,4 +81,4 @@
 julia> SDDP.simulate(model, 1);
 ω is: [54, 38, 19]
 ω is: [43, 3, 13]
-ω is: [43, 4, 17]
+ω is: [43, 4, 17]
diff --git a/previews/PR797/guides/add_noise_in_the_constraint_matrix/index.html b/previews/PR797/guides/add_noise_in_the_constraint_matrix/index.html index 4b6d273cd..ae7eb323f 100644 --- a/previews/PR797/guides/add_noise_in_the_constraint_matrix/index.html +++ b/previews/PR797/guides/add_noise_in_the_constraint_matrix/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Add noise in the constraint matrix

SDDP.jl supports random coefficients in the constraint matrix through the JuMP.set_normalized_coefficient function.

julia> model = SDDP.LinearPolicyGraph(
+

Add noise in the constraint matrix

SDDP.jl supports random coefficients in the constraint matrix through the JuMP.set_normalized_coefficient function.

julia> model = SDDP.LinearPolicyGraph(
                stages=3, lower_bound = 0, optimizer = HiGHS.Optimizer
                ) do subproblem, t
            @variable(subproblem, x, SDDP.State, initial_value = 0.0)
@@ -20,4 +20,4 @@
 julia> SDDP.simulate(model, 1);
 emissions : x_out <= 1
 emissions : 0.2 x_out <= 1
-emissions : 0.5 x_out <= 1
Note

JuMP will normalize constraints by moving all variables to the left-hand side. Thus, @constraint(model, 0 <= 1 - x.out) becomes x.out <= 1. JuMP.set_normalized_coefficient sets the coefficient on the normalized constraint.

+emissions : 0.5 x_out <= 1
Note

JuMP will normalize constraints by moving all variables to the left-hand side. Thus, @constraint(model, 0 <= 1 - x.out) becomes x.out <= 1. JuMP.set_normalized_coefficient sets the coefficient on the normalized constraint.

diff --git a/previews/PR797/guides/choose_a_stopping_rule/index.html b/previews/PR797/guides/choose_a_stopping_rule/index.html index af5884ec5..a80a437d1 100644 --- a/previews/PR797/guides/choose_a_stopping_rule/index.html +++ b/previews/PR797/guides/choose_a_stopping_rule/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Choose a stopping rule

The theory of SDDP tells us that the algorithm converges to an optimal policy almost surely in a finite number of iterations. In practice, this number is very large. Therefore, we need some way of pre-emptively terminating SDDP when the solution is “good enough.” We call heuristics for pre-emptively terminating SDDP stopping rules.

Basic limits

The training of an SDDP policy can be terminated after a fixed number of iterations using the iteration_limit keyword.

SDDP.train(model; iteration_limit = 10)

The training of an SDDP policy can be terminated after a fixed number of seconds using the time_limit keyword.

SDDP.train(model; time_limit = 2.0)

Stopping rules

In addition to the limits provided as keyword arguments, a variety of other stopping rules are available. These can be passed to SDDP.train as a vector to the stopping_rules keyword. Training stops if any of the rules becomes active. To stop when all of the rules become active, use SDDP.StoppingChain. For example:

# Terminate if BoundStalling becomes true
+

Choose a stopping rule

The theory of SDDP tells us that the algorithm converges to an optimal policy almost surely in a finite number of iterations. In practice, this number is very large. Therefore, we need some way of pre-emptively terminating SDDP when the solution is “good enough.” We call heuristics for pre-emptively terminating SDDP stopping rules.

Basic limits

The training of an SDDP policy can be terminated after a fixed number of iterations using the iteration_limit keyword.

SDDP.train(model; iteration_limit = 10)

The training of an SDDP policy can be terminated after a fixed number of seconds using the time_limit keyword.

SDDP.train(model; time_limit = 2.0)

Stopping rules

In addition to the limits provided as keyword arguments, a variety of other stopping rules are available. These can be passed to SDDP.train as a vector to the stopping_rules keyword. Training stops if any of the rules becomes active. To stop when all of the rules become active, use SDDP.StoppingChain. For example:

# Terminate if BoundStalling becomes true
 SDDP.train(
     model;
     stopping_rules = [SDDP.BoundStalling(10, 1e-4)],
@@ -21,4 +21,4 @@
     stopping_rules = [
         SDDP.StoppingChain(SDDP.BoundStalling(10, 1e-4), SDDP.TimeLimit(100.0)),
     ],
-)

See Stopping rules for a list of stopping rules supported by SDDP.jl.

+)

See Stopping rules for a list of stopping rules supported by SDDP.jl.
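
For example, to terminate as soon as either the bound stalls or a wall-clock limit is hit, pass the rules as separate elements of the vector (the tolerances and limits below are illustrative):

SDDP.train(
    model;
    stopping_rules = [
        SDDP.BoundStalling(10, 1e-4),  # stop if the bound stalls for 10 iterations
        SDDP.TimeLimit(100.0),         # or if 100 seconds have elapsed
    ],
)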

diff --git a/previews/PR797/guides/create_a_belief_state/index.html b/previews/PR797/guides/create_a_belief_state/index.html index 6da963d77..a2b7849e5 100644 --- a/previews/PR797/guides/create_a_belief_state/index.html +++ b/previews/PR797/guides/create_a_belief_state/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Create a belief state

SDDP.jl includes an implementation of the algorithm described in Dowson, O., Morton, D.P., & Pagnoncelli, B.K. (2020). Partially observable multistage stochastic optimization. Operations Research Letters, 48(4), 505–512.

Given a SDDP.Graph object (see Create a general policy graph for details), we can define the ambiguity partition using SDDP.add_ambiguity_set.

For example, first we create a Markovian graph:

julia> using SDDP
julia> G = SDDP.MarkovianGraph([[0.5 0.5], [0.2 0.8; 0.8 0.2]])Root +

Create a belief state

SDDP.jl includes an implementation of the algorithm described in Dowson, O., Morton, D.P., & Pagnoncelli, B.K. (2020). Partially observable multistage stochastic optimization. Operations Research Letters, 48(4), 505–512.

Given a SDDP.Graph object (see Create a general policy graph for details), we can define the ambiguity partition using SDDP.add_ambiguity_set.

For example, first we create a Markovian graph:

julia> using SDDP
julia> G = SDDP.MarkovianGraph([[0.5 0.5], [0.2 0.8; 0.8 0.2]])Root (0, 1) Nodes (1, 1) @@ -34,4 +34,4 @@ (1, 2) => (2, 2) w.p. 0.2 Partitions {(1, 1), (1, 2)} - {(2, 1), (2, 2)}
+ {(2, 1), (2, 2)}
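
The partitions shown above come from grouping the two nodes in each stage into a single ambiguity set; a sketch of the calls is below (the trailing Lipschitz bound of 1e2 is an illustrative value, not a recommendation):

# Nodes (1, 1) and (1, 2) cannot be distinguished by the agent.
SDDP.add_ambiguity_set(G, [(1, 1), (1, 2)], 1e2)
# Neither can nodes (2, 1) and (2, 2).
SDDP.add_ambiguity_set(G, [(2, 1), (2, 2)], 1e2)
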
diff --git a/previews/PR797/guides/create_a_general_policy_graph/index.html b/previews/PR797/guides/create_a_general_policy_graph/index.html index 2a529f832..96893fd98 100644 --- a/previews/PR797/guides/create_a_general_policy_graph/index.html +++ b/previews/PR797/guides/create_a_general_policy_graph/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Create a general policy graph

SDDP.jl uses the concept of a policy graph to formulate multistage stochastic programming problems. For more details, read An introduction to SDDP.jl or the paper Dowson, O., (2020). The policy graph decomposition of multistage stochastic optimization problems. Networks, 76(1), 3-23. doi.

Creating a SDDP.Graph

Linear graphs

Linear policy graphs can be created using the SDDP.LinearGraph function.

julia> graph = SDDP.LinearGraph(3)
+

Create a general policy graph

SDDP.jl uses the concept of a policy graph to formulate multistage stochastic programming problems. For more details, read An introduction to SDDP.jl or the paper Dowson, O., (2020). The policy graph decomposition of multistage stochastic optimization problems. Networks, 76(1), 3-23. doi.

Creating a SDDP.Graph

Linear graphs

Linear policy graphs can be created using the SDDP.LinearGraph function.

julia> graph = SDDP.LinearGraph(3)
 Root
  0
 Nodes
@@ -110,4 +110,4 @@
     @variable(subproblem, x >= 0, SDDP.State, initial_value = 1)
     @constraint(subproblem, x.out <= x.in)
     @stageobjective(subproblem, price * x.out)
-end
+end
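
Once a graph has been built, it can be passed as the first argument to SDDP.PolicyGraph. A minimal sketch, assuming the graph variable from above and a purely illustrative subproblem:

using SDDP, HiGHS

model = SDDP.PolicyGraph(
    graph;
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do subproblem, node
    @variable(subproblem, x >= 0, SDDP.State, initial_value = 1)
    @constraint(subproblem, x.out <= x.in)
    @stageobjective(subproblem, x.out)
end
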
diff --git a/previews/PR797/guides/debug_a_model/index.html b/previews/PR797/guides/debug_a_model/index.html index 35f9aa361..69d5c6918 100644 --- a/previews/PR797/guides/debug_a_model/index.html +++ b/previews/PR797/guides/debug_a_model/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Debug a model

Building multistage stochastic programming models is hard. There are a lot of different pieces that need to be put together, and we typically have no idea of the optimal policy, so it can be hard (impossible?) to validate the solution.

That said, here are a few tips to verify and validate models built using SDDP.jl.

Writing subproblems to file

The first step to debug a model is to write out the subproblems to a file in order to check that you are actually building what you think you are building.

This can be achieved with the help of two functions: SDDP.parameterize and SDDP.write_subproblem_to_file. The first lets you parameterize a node given a noise, and the second writes out the subproblem to a file.

Here is an example model:

using SDDP, HiGHS
+

Debug a model

Building multistage stochastic programming models is hard. There are a lot of different pieces that need to be put together, and we typically have no idea of the optimal policy, so it can be hard (impossible?) to validate the solution.

That said, here are a few tips to verify and validate models built using SDDP.jl.

Writing subproblems to file

The first step to debug a model is to write out the subproblems to a file in order to check that you are actually building what you think you are building.

This can be achieved with the help of two functions: SDDP.parameterize and SDDP.write_subproblem_to_file. The first lets you parameterize a node given a noise, and the second writes out the subproblem to a file.

Here is an example model:

using SDDP, HiGHS
 
 model = SDDP.LinearPolicyGraph(
             stages = 2,
@@ -68,4 +68,4 @@
 julia> optimize!(det_equiv)
 
 julia> objective_value(det_equiv)
--5.472500000000001
Warning

The deterministic equivalent scales poorly with problem size. Only use this on small problems!

+-5.472500000000001
Warning

The deterministic equivalent scales poorly with problem size. Only use this on small problems!
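
Returning to the subproblem-writing workflow described at the top of this guide, here is a sketch of the two calls applied to the model above (the node index, noise value, and filename are illustrative and must match how the model's noise was defined):

# Fix the stagewise-independent noise in node 1 to one realization.
SDDP.parameterize(model[1], 1.1)

# Write that node's (now deterministic) subproblem to an LP file for inspection.
SDDP.write_subproblem_to_file(model[1], "node_1.lp")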

diff --git a/previews/PR797/guides/improve_computational_performance/index.html b/previews/PR797/guides/improve_computational_performance/index.html index 2d482dae3..704702470 100644 --- a/previews/PR797/guides/improve_computational_performance/index.html +++ b/previews/PR797/guides/improve_computational_performance/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Improve computational performance

SDDP is a computationally intensive algorithm. Here are some suggestions for how the computational performance can be improved.

Numerical stability (again)

We've already discussed this in the Numerical stability section of Words of warning. But, it's so important that we're going to say it again: improving the problem scaling is one of the best ways to improve the numerical performance of your models.

Solver selection

The majority of the solution time is spent inside the low-level solvers. Choosing the right solver (and the associated settings) can lead to big speed-ups.

  • Choose a commercial solver.

    Options include CPLEX, Gurobi, and Xpress. Using free solvers such as CLP and HiGHS isn't a viable approach for large problems.

  • Try different solvers.

Even commercial solvers can have wildly different solution times. We've seen models on which CPLEX was 50% faster than Gurobi, and vice versa.

  • Experiment with different solver options.

    Using the default settings is usually a good option. However, sometimes it can pay to change these. In particular, forcing solvers to use the dual simplex algorithm (e.g., Method=1 in Gurobi) is usually a performance win.

Single-cut vs. multi-cut

There are two competing ways that cuts can be created in SDDP: single-cut and multi-cut. By default, SDDP.jl uses the single-cut version of SDDP.

The performance of each method is problem-dependent. We recommend that you try both in order to see which one performs better. In general, the single-cut method works better when the number of realizations of the stagewise-independent random variable is large, whereas the multi-cut method works better on small problems. However, the multi-cut method can cause numerical stability problems, particularly if used in conjunction with objective or belief state variables.

You can switch between the methods by passing the relevant flag to cut_type in SDDP.train.

SDDP.train(model; cut_type = SDDP.SINGLE_CUT)
+

Improve computational performance

SDDP is a computationally intensive algorithm. Here are some suggestions for how the computational performance can be improved.

Numerical stability (again)

We've already discussed this in the Numerical stability section of Words of warning. But, it's so important that we're going to say it again: improving the problem scaling is one of the best ways to improve the numerical performance of your models.

Solver selection

The majority of the solution time is spent inside the low-level solvers. Choosing the right solver (and the associated settings) can lead to big speed-ups.

  • Choose a commercial solver.

    Options include CPLEX, Gurobi, and Xpress. Using free solvers such as CLP and HiGHS isn't a viable approach for large problems.

  • Try different solvers.

Even commercial solvers can have wildly different solution times. We've seen models on which CPLEX was 50% faster than Gurobi, and vice versa.

  • Experiment with different solver options.

    Using the default settings is usually a good option. However, sometimes it can pay to change these. In particular, forcing solvers to use the dual simplex algorithm (e.g., Method=1 in Gurobi) is usually a performance win, as sketched below.
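
A sketch of passing a non-default solver option when building a model (Method=1 is Gurobi's dual simplex setting; whether it helps is problem-dependent, and the model below is illustrative only):

using SDDP, Gurobi, JuMP

model = SDDP.LinearPolicyGraph(
    stages = 3,
    lower_bound = 0.0,
    # Force dual simplex for every subproblem solve.
    optimizer = optimizer_with_attributes(Gurobi.Optimizer, "Method" => 1),
) do subproblem, t
    @variable(subproblem, x >= 0, SDDP.State, initial_value = 0)
    @stageobjective(subproblem, x.out)
end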

Single-cut vs. multi-cut

There are two competing ways that cuts can be created in SDDP: single-cut and multi-cut. By default, SDDP.jl uses the single-cut version of SDDP.

The performance of each method is problem-dependent. We recommend that you try both in order to see which one performs better. In general, the single-cut method works better when the number of realizations of the stagewise-independent random variable is large, whereas the multi-cut method works better on small problems. However, the multi-cut method can cause numerical stability problems, particularly if used in conjunction with objective or belief state variables.

You can switch between the methods by passing the relevant flag to cut_type in SDDP.train.

SDDP.train(model; cut_type = SDDP.SINGLE_CUT)
 SDDP.train(model; cut_type = SDDP.MULTI_CUT)

Parallelism

SDDP.jl can take advantage of the parallel nature of modern computers to solve problems across multiple cores.

Info

We highly recommend that you read the Julia manual's section on parallel computing.

You can start Julia from a command line with N processors using the -p flag:

julia -p N

Alternatively, you can use the Distributed.jl package:

using Distributed
 Distributed.addprocs(N)
Warning

Workers DON'T inherit their parent's Pkg environment. Therefore, if you started Julia with --project=/path/to/environment (or if you activated an environment from the REPL), you will need to put the following at the top of your script:

using Distributed
 @everywhere begin
@@ -45,4 +45,4 @@
         env = Gurobi.Env()
         set_optimizer(m, () -> Gurobi.Optimizer(env))
     end,
-)
+)
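
A sketch of the worker set-up from the warning above, followed by a training call that uses the asynchronous scheme (the environment path and worker count are placeholders):

using Distributed
Distributed.addprocs(4)

@everywhere begin
    import Pkg
    # Placeholder: point this at the same environment as the parent process.
    Pkg.activate("/path/to/environment")
    using SDDP, HiGHS
end

# Train across the worker processes.
SDDP.train(model; parallel_scheme = SDDP.Asynchronous())
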
diff --git a/previews/PR797/guides/simulate_using_a_different_sampling_scheme/index.html b/previews/PR797/guides/simulate_using_a_different_sampling_scheme/index.html index 5fb36c118..ae0e75598 100644 --- a/previews/PR797/guides/simulate_using_a_different_sampling_scheme/index.html +++ b/previews/PR797/guides/simulate_using_a_different_sampling_scheme/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Simulate using a different sampling scheme

By default, SDDP.simulate will simulate the policy using the distributions of noise terms that were defined when the model was created. We call these in-sample simulations. However, in general the in-sample distributions are an approximation of some underlying probability model which we term the true process. Therefore, SDDP.jl makes it easy to simulate the policy using different probability distributions.

To demonstrate the different ways of simulating the policy, we're going to use the model from the tutorial Markovian policy graphs.

julia> using SDDP, HiGHS
+

Simulate using a different sampling scheme

By default, SDDP.simulate will simulate the policy using the distributions of noise terms that were defined when the model was created. We call these in-sample simulations. However, in general the in-sample distributions are an approximation of some underlying probability model which we term the true process. Therefore, SDDP.jl makes it easy to simulate the policy using different probability distributions.

To demonstrate the different ways of simulating the policy, we're going to use the model from the tutorial Markovian policy graphs.

julia> using SDDP, HiGHS
 
 julia> Ω = [
            (inflow = 0.0, fuel_multiplier = 1.5),
@@ -165,4 +165,4 @@
            ],
            [0.3, 0.7],
        )
-A Historical sampler with 2 scenarios sampled probabilistically.
Tip

Your sample space doesn't have to be a NamedTuple. It can be any Julia type! Use a Vector if that is easier, or define your own struct.

+A Historical sampler with 2 scenarios sampled probabilistically.
Tip

Your sample space doesn't have to be a NamedTuple. It can be any Julia type! Use a Vector if that is easier, or define your own struct.
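
To simulate with one of these samplers, pass it via the sampling_scheme keyword of SDDP.simulate. A sketch using a single Historical scenario is below; the (stage, Markov state) indices must form a valid path through this model's policy graph, and the ones shown, like the choice of noise terms from Ω, are illustrative only:

simulations = SDDP.simulate(
    model,
    1;  # one replication
    sampling_scheme = SDDP.Historical([
        ((1, 1), Ω[1]),
        ((2, 2), Ω[3]),
        ((3, 1), Ω[2]),
    ]),
)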

diff --git a/previews/PR797/index.html b/previews/PR797/index.html index e6cce61e6..b1de778b4 100644 --- a/previews/PR797/index.html +++ b/previews/PR797/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -
logo

Introduction

Build Status code coverage

Welcome to SDDP.jl, a package for solving large convex multistage stochastic programming problems using stochastic dual dynamic programming.

SDDP.jl is built on JuMP, so it supports a number of open-source and commercial solvers, making it a powerful and flexible tool for stochastic optimization.

The implementation of the stochastic dual dynamic programming algorithm in SDDP.jl is state of the art, and it offers a number of advanced features not commonly found in other implementations, including support for:

  • infinite horizon problems
  • convex risk measures
  • mixed-integer state and control variables
  • partially observable stochastic processes.

Installation

Install SDDP.jl as follows:

julia> import Pkg
+
logo

Introduction

Build Status code coverage

Welcome to SDDP.jl, a package for solving large convex multistage stochastic programming problems using stochastic dual dynamic programming.

SDDP.jl is built on JuMP, so it supports a number of open-source and commercial solvers, making it a powerful and flexible tool for stochastic optimization.

The implementation of the stochastic dual dynamic programming algorithm in SDDP.jl is state of the art, and it offers a number of advanced features not commonly found in other implementations, including support for:

  • infinite horizon problems
  • convex risk measures
  • mixed-integer state and control variables
  • partially observable stochastic processes.

Installation

Install SDDP.jl as follows:

julia> import Pkg
 
 julia> Pkg.add("SDDP")

License

SDDP.jl is licensed under the MPL 2.0 license.

Resources for getting started

There are a few ways to get started with SDDP.jl:

Getting help

If you need help, please open a GitHub issue.

How the documentation is structured

Having a high-level overview of how this documentation is structured will help you know where to look for certain things.

  • Tutorials contains step-by-step explanations of how to use SDDP.jl. Once you've got SDDP.jl installed, start by reading An introduction to SDDP.jl.

  • Guides contains "how-to" snippets that demonstrate specific topics within SDDP.jl. A good one to get started on is Debug a model.

  • Explanation contains step-by-step explanations of the theory and algorithms that underpin SDDP.jl. If you want a basic understanding of the algorithm behind SDDP.jl, start with Introductory theory.

  • Examples contain worked examples of various problems solved using SDDP.jl. A good one to get started on is the Hydro-thermal scheduling problem. In particular, it shows how to solve an infinite horizon problem.

  • The API Reference contains a complete list of the functions you can use in SDDP.jl. Look here if you want to know how to use a particular function.

Citing SDDP.jl

If you use SDDP.jl, we ask that you please cite the following:

@article{dowson_sddp.jl,
 	title = {{SDDP}.jl: a {Julia} package for stochastic dual dynamic programming},
@@ -47,4 +47,4 @@
 	journal = {Annals of Operations Research},
 	author = {Dowson, O. and Morton, D.P. and Pagnoncelli, B.K.},
 	year = {2022},
-}

Here is an earlier preprint.

+}

Here is an earlier preprint.

diff --git a/previews/PR797/objects.inv b/previews/PR797/objects.inv index 3d4e3d2fce2673a0b1e5a5a7a010794a95621d3a..3a765412ff882d7b1655fe3f7cb6f75c0cecef46 100644 GIT binary patch delta 9655 (binary delta content omitted)
diff --git a/previews/PR797/release_notes/index.html index 26b8474cf..81ce6cc52 100644 --- a/previews/PR797/release_notes/index.html +++ b/previews/PR797/release_notes/index.html @@ -3,4 +3,4 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Release notes

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

v1.9.0 (October 17, 2024)

Added

Fixed

  • Fixed the tests to skip threading tests if running in serial (#770)
  • Fixed BanditDuality to handle the case where the standard deviation is NaN (#779)
  • Fixed an error when lagged state variables are encountered in MSPFormat (#786)
  • Fixed publication_plot with replications of different lengths (#788)
  • Fixed CTRL+C interrupting the code at unsafe points (#789)

Other

  • Documentation improvements (#771) (#772)
  • Updated printing because of changes in JuMP (#773)

v1.8.1 (August 5, 2024)

Fixed

  • Fixed various issues with SDDP.Threaded() (#761)
  • Fixed a deprecation warning for sorting a dictionary (#763)

Other

  • Updated copyright notices (#762)
  • Updated .JuliaFormatter.toml (#764)

v1.8.0 (July 24, 2024)

Added

  • Added SDDP.Threaded(), which is an experimental parallel scheme that supports solving problems using multiple threads. Some parts of SDDP.jl may not be thread-safe, and this can cause incorrect results, segfaults, or other errors. Please use with care and report any issues by opening a GitHub issue. (#758)

Other

  • Documentation improvements and fixes (#747) (#759)

v1.7.0 (June 4, 2024)

Added

  • Added sample_backward_noise_terms_with_state for creating backward pass sampling schemes that depend on the current primal state. (#742) (Thanks @arthur-brigatto)

Fixed

  • Fixed error message when publication_plot has non-finite data (#738)

Other

  • Updated the logo constructor (#730)

v1.6.7 (February 1, 2024)

Fixed

  • Fixed non-constant state dimension in the MSPFormat reader (#695)
  • Fixed SimulatorSamplingScheme for deterministic nodes (#710)
  • Fixed line search in BFGS (#711)
  • Fixed handling of NEARLY_FEASIBLE_POINT status (#726)

Other

  • Documentation improvements (#692) (#694) (#706) (#716) (#727)
  • Updated to StochOptFormat v1.0 (#705)
  • Added an experimental OuterApproximation algorithm (#709)
  • Updated .gitignore (#717)
  • Added code for MDP paper (#720) (#721)
  • Added Google analytics (#723)

v1.6.6 (September 29, 2023)

Other

v1.6.5 (September 25, 2023)

Fixed

Other

v1.6.4 (September 23, 2023)

Fixed

Other

  • Documentation updates (#658) (#666) (#671)
  • Switch to GitHub action for deploying docs (#668) (#670)
  • Update to Documenter@1 (#669)

v1.6.3 (September 8, 2023)

Fixed

  • Fixed default stopping rule with iteration_limit or time_limit set (#662)

Other

v1.6.2 (August 24, 2023)

Fixed

  • MSPFormat now detects and exploits stagewise independent lattices (#653)
  • Fixed set_optimizer for models read from file (#654)

Other

  • Fixed typo in pglib_opf.jl (#647)
  • Fixed documentation build and added color (#652)

v1.6.1 (July 20, 2023)

Fixed

  • Fixed bugs in MSPFormat reader (#638) (#639)

Other

  • Clarified OutOfSampleMonteCarlo docstring (#643)

v1.6.0 (July 3, 2023)

Added

Other

v1.5.1 (June 30, 2023)

This release contains a number of minor code changes, but it has a large impact on the content that is printed to screen. In particular, we now log periodically, instead of each iteration, and a "good" stopping rule is used as the default if none are specified. Try using SDDP.train(model) to see the difference.

Other

  • Fixed various typos in the documentation (#617)
  • Fixed printing test after changes in JuMP (#618)
  • Set SimulationStoppingRule as the default stopping rule (#619)
  • Changed the default logging frequency. Pass log_every_seconds = 0.0 to train to revert to the old behavior. (#620)
  • Added example usage with Distributions.jl (@slwu89) (#622)
  • Removed the numerical issue @warn (#627)
  • Improved the quality of docstrings (#630)

v1.5.0 (May 14, 2023)

Added

  • Added the ability to use a different model for the forward pass. This is a novel feature that lets you train better policies when the model is non-convex or does not have a well-defined dual. See the Alternative forward models tutorial in which we train convex and non-convex formulations of the optimal power flow problem. (#611)

Other

  • Updated missing changelog entries (#608)
  • Removed global variables (#610)
  • Converted the Options struct to keyword arguments. This struct was a private implementation detail, but the change is breaking if you developed an extension to SDDP that touched these internals. (#612)
  • Fixed some typos (#613)

v1.4.0 (May 8, 2023)

Added

Fixed

  • Fixed parsing of some MSPFormat files (#602) (#604)
  • Fixed printing in header (#605)

v1.3.0 (May 3, 2023)

Added

  • Added experimental support for SDDP.MSPFormat.read_from_file (#593)

Other

  • Updated to StochOptFormat v0.3 (#600)

v1.2.1 (May 1, 2023)

Fixed

  • Fixed log_every_seconds (#597)

v1.2.0 (May 1, 2023)

Added

Other

  • Tweaked how the log is printed (#588)
  • Updated to StochOptFormat v0.2 (#592)

v1.1.4 (April 10, 2023)

Fixed

  • Logs are now flushed every iteration (#584)

Other

  • Added docstrings to various functions (#581)
  • Minor documentation updates (#580)
  • Clarified integrality documentation (#582)
  • Updated the README (#585)
  • Number of numerical issues is now printed to the log (#586)

v1.1.3 (April 2, 2023)

Other

v1.1.2 (March 18, 2023)

Other

v1.1.1 (March 16, 2023)

Other

  • Fixed email in Project.toml
  • Added notebook to documentation tutorials (#571)

v1.1.0 (January 12, 2023)

Added

v1.0.0 (January 3, 2023)

Although we're bumping MAJOR version, this is a non-breaking release. Going forward:

  • New features will bump the MINOR version
  • Bug fixes, maintenance, and documentation updates will bump the PATCH version
  • We will support only the Long Term Support (currently v1.6.7) and the latest patch (currently v1.8.4) releases of Julia. Updates to the LTS version will bump the MINOR version
  • Updates to the compat bounds of package dependencies will bump the PATCH version.

We do not intend any breaking changes to the public API, which would require a new MAJOR release. The public API is everything defined in the documentation. Anything not in the documentation is considered private and may change in any PATCH release.

Added

Other

v0.4.9 (January 3, 2023)

Added

Other

  • Added tutorial on Markov Decision Processes (#556)
  • Added two-stage newsvendor tutorial (#557)
  • Refactored the layout of the documentation (#554) (#555)
  • Updated copyright to 2023 (#558)
  • Fixed errors in the documentation (#561)

v0.4.8 (December 19, 2022)

Added

Fixed

  • Reverted then fixed (#531) because it failed to account for problems with integer variables (#546) (#551)

v0.4.7 (December 17, 2022)

Added

  • Added initial_node support to InSampleMonteCarlo and OutOfSampleMonteCarlo (#535)

Fixed

  • Rethrow InterruptException when solver is interrupted (#534)
  • Fixed numerical recovery when we need dual solutions (#531) (Thanks @bfpc)
  • Fixed re-using the dashboard = true option between solves (#538)
  • Fixed bug when no @stageobjective is set (now defaults to 0.0) (#539)
  • Fixed errors thrown when invalid inputs are provided to add_objective_state (#540)

Other

  • Drop support for Julia versions prior to 1.6 (#533)
  • Updated versions of dependencies (#522) (#533)
  • Switched to HiGHS in the documentation and tests (#533)
  • Added license headers (#519)
  • Fixed link in air conditioning example (#521) (Thanks @conema)
  • Clarified variable naming in deterministic equivalent (#525) (Thanks @lucasprocessi)
  • Added this change log (#536)
  • Cuts are now written to model.cuts.json when numerical instability is discovered. This can aid debugging because it allows you to reload the cuts as of the iteration that caused the numerical issue (#537)

v0.4.6 (March 25, 2022)

Other

  • Updated to JuMP v1.0 (#517)

v0.4.5 (March 9, 2022)

Fixed

  • Fixed issue with set_silent in a subproblem (#510)

Other

v0.4.4 (December 11, 2021)

Added

  • Added BanditDuality (#471)
  • Added benchmark scripts (#475) (#476) (#490)
  • write_cuts_to_file now saves visited states (#468)

Fixed

  • Fixed BoundStalling in a deterministic policy (#470) (#474)
  • Fixed magnitude warning with zero coefficients (#483)

Other

  • Improvements to LagrangianDuality (#481) (#482) (#487)
  • Improvements to StrengthenedConicDuality (#486)
  • Switch to functional form for the tests (#478)
  • Fixed typos (#472) (Thanks @vfdev-5)
  • Update to JuMP v0.22 (#498)

v0.4.3 (August 31, 2021)

Added

  • Added biobjective solver (#462)
  • Added forward_pass_callback (#466)

Other

  • Update tutorials and documentation (#459) (#465)
  • Organize how paper materials are stored (#464)

v0.4.2 (August 24, 2021)

Fixed

  • Fixed a bug in Lagrangian duality (#457)

v0.4.1 (August 23, 2021)

Other

  • Minor changes to our implementation of LagrangianDuality (#454) (#455)

v0.4.0 (August 17, 2021)

Breaking

Other

v0.3.17 (July 6, 2021)

Added

Other

  • Display more model attributes (#438)
  • Documentation improvements (#433) (#437) (#439)

v0.3.16 (June 17, 2021)

Added

Other

  • Update risk measure docstrings (#418)

v0.3.15 (June 1, 2021)

Added

Fixed

Other

  • Add JuliaFormatter (#412)
  • Documentation improvements (#406) (#408)

v0.3.14 (March 30, 2021)

Fixed

  • Fixed O(N^2) behavior in get_same_children (#393)

v0.3.13 (March 27, 2021)

Fixed

  • Fixed bug in print.jl
  • Fixed compat of Reexport (#388)

v0.3.12 (March 22, 2021)

Added

  • Added problem statistics to header (#385) (#386)

Fixed

  • Fixed subtypes in visualization (#384)

v0.3.11 (March 22, 2021)

Fixed

  • Fixed constructor in direct mode (#383)

Other

  • Fix documentation (#379)

v0.3.10 (February 23, 2021)

Fixed

  • Fixed seriescolor in publication plot (#376)

v0.3.9 (February 20, 2021)

Added

  • Add option to simulate with different incoming state (#372)
  • Added warning for cuts with high dynamic range (#373)

Fixed

  • Fixed seriesalpha in publication plot (#375)

v0.3.8 (January 19, 2021)

Other

v0.3.7 (January 8, 2021)

Other

v0.3.6 (December 17, 2020)

Other

  • Fix typos (#358)
  • Collapse navigation bar in docs (#359)
  • Update TagBot.yml (#361)

v0.3.5 (November 18, 2020)

Other

  • Update citations (#348)
  • Switch to GitHub actions (#355)

v0.3.4 (August 25, 2020)

Added

  • Added non-uniform distributionally robust risk measure (#328)
  • Added numerical recovery functions (#330)
  • Added experimental StochOptFormat (#332) (#336) (#337) (#341) (#343) (#344)
  • Added entropic risk measure (#347)

Other

v0.3.3 (June 19, 2020)

Added

  • Added asynchronous support for price and belief states (#325)
  • Added ForwardPass plug-in system (#320)

Fixed

  • Fix check for probabilities in Markovian graph (#322)

v0.3.2 (April 6, 2020)

Added

Other

  • Improve error message in deterministic equivalent (#312)
  • Update to RecipesBase 1.0 (#313)

v0.3.1 (February 26, 2020)

Fixed

  • Fixed filename in integrality_handlers.jl (#304)

v0.3.0 (February 20, 2020)

Breaking

  • Breaking changes to update to JuMP v0.21 (#300).

v0.2.4 (February 7, 2020)

Added

  • Added a counter for the number of total subproblem solves (#301)

Other

  • Update formatter (#298)
  • Added tests (#299)

v0.2.3 (January 24, 2020)

Added

  • Added support for convex risk measures (#294)

Fixed

  • Fixed bug when subproblem is infeasible (#296)
  • Fixed bug in deterministic equivalent (#297)

Other

  • Added example from IJOC paper (#293)

v0.2.2 (January 10, 2020)

Fixed

  • Fixed flakey time limit in tests (#291)

Other

  • Removed MathOptFormat.jl (#289)
  • Update copyright (#290)

v0.2.1 (December 19, 2019)

Added

  • Added support for approximating a Markov lattice (#282) (#285)
  • Add tools for visualizing the value function (#272) (#286)
  • Write .mof.json files on error (#284)

Other

  • Improve documentation (#281) (#283)
  • Update tests for Julia 1.3 (#287)

v0.2.0 (December 16, 2019)

This version added the asynchronous parallel implementation with a few minor breaking changes in how we iterated internally. It didn't break basic user-facing models, only implementations that implemented some of the extension features. It probably could have been a v1.1 release.

Added

  • Added asynchronous parallel implementation (#277)
  • Added roll-out algorithm for cyclic graphs (#279)

Other

  • Improved error messages in PolicyGraph (#271)
  • Added JuliaFormatter (#273) (#276)
  • Fixed compat bounds (#274) (#278)
  • Added documentation for simulating non-standard graphs (#280)

v0.1.0 (October 17, 2019)

A complete rewrite of SDDP.jl based on the policy graph framework. This was essentially a new package. It has minimal code in common with the previous implementation.

Development started on September 28, 2018 in Kokako.jl, and the code was merged into SDDP.jl on March 14, 2019.

The pull request SDDP.jl#180 lists the 29 issues that the rewrite closed.

v0.0.1 (April 18, 2018)

Initial release. Development had been underway since January 22, 2016 in the StochDualDynamicProgram.jl repository. The last development commit there was April 5, 2017. Work then continued in this repository for a year before the first tagged release.

+

Release notes

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

v1.9.0 (October 17, 2024)

Added

Fixed

  • Fixed the tests to skip threading tests if running in serial (#770)
  • Fixed BanditDuality to handle the case where the standard deviation is NaN (#779)
  • Fixed an error when lagged state variables are encountered in MSPFormat (#786)
  • Fixed publication_plot with replications of different lengths (#788)
  • Fixed CTRL+C interrupting the code at unsafe points (#789)

Other

  • Documentation improvements (#771) (#772)
  • Updated printing because of changes in JuMP (#773)

v1.8.1 (August 5, 2024)

Fixed

  • Fixed various issues with SDDP.Threaded() (#761)
  • Fixed a deprecation warning for sorting a dictionary (#763)

Other

  • Updated copyright notices (#762)
  • Updated .JuliaFormatter.toml (#764)

v1.8.0 (July 24, 2024)

Added

  • Added SDDP.Threaded(), which is an experimental parallel scheme that supports solving problems using multiple threads. Some parts of SDDP.jl may not be thread-safe, and this can cause incorrect results, segfaults, or other errors. Please use with care and report any issues by opening a GitHub issue. (#758)

Other

  • Documentation improvements and fixes (#747) (#759)

v1.7.0 (June 4, 2024)

Added

  • Added sample_backward_noise_terms_with_state for creating backward pass sampling schemes that depend on the current primal state. (#742) (Thanks @arthur-brigatto)

Fixed

  • Fixed error message when publication_plot has non-finite data (#738)

Other

  • Updated the logo constructor (#730)

v1.6.7 (February 1, 2024)

Fixed

  • Fixed non-constant state dimension in the MSPFormat reader (#695)
  • Fixed SimulatorSamplingScheme for deterministic nodes (#710)
  • Fixed line search in BFGS (#711)
  • Fixed handling of NEARLY_FEASIBLE_POINT status (#726)

Other

  • Documentation improvements (#692) (#694) (#706) (#716) (#727)
  • Updated to StochOptFormat v1.0 (#705)
  • Added an experimental OuterApproximation algorithm (#709)
  • Updated .gitignore (#717)
  • Added code for MDP paper (#720) (#721)
  • Added Google analytics (#723)

v1.6.6 (September 29, 2023)

Other

v1.6.5 (September 25, 2023)

Fixed

Other

v1.6.4 (September 23, 2023)

Fixed

Other

  • Documentation updates (#658) (#666) (#671)
  • Switch to GitHub action for deploying docs (#668) (#670)
  • Update to Documenter@1 (#669)

v1.6.3 (September 8, 2023)

Fixed

  • Fixed default stopping rule with iteration_limit or time_limit set (#662)

Other

v1.6.2 (August 24, 2023)

Fixed

  • MSPFormat now detects and exploits stagewise independent lattices (#653)
  • Fixed set_optimizer for models read from file (#654)

Other

  • Fixed typo in pglib_opf.jl (#647)
  • Fixed documentation build and added color (#652)

v1.6.1 (July 20, 2023)

Fixed

  • Fixed bugs in MSPFormat reader (#638) (#639)

Other

  • Clarified OutOfSampleMonteCarlo docstring (#643)

v1.6.0 (July 3, 2023)

Added

Other

v1.5.1 (June 30, 2023)

This release contains a number of minor code changes, but it has a large impact on the content that is printed to screen. In particular, we now log periodically, instead of each iteration, and a "good" stopping rule is used as the default if none are specified. Try using SDDP.train(model) to see the difference.

Other

  • Fixed various typos in the documentation (#617)
  • Fixed printing test after changes in JuMP (#618)
  • Set SimulationStoppingRule as the default stopping rule (#619)
  • Changed the default logging frequency. Pass log_every_seconds = 0.0 to train to revert to the old behavior. (#620)
  • Added example usage with Distributions.jl (@slwu89) (#622)
  • Removed the numerical issue @warn (#627)
  • Improved the quality of docstrings (#630)

v1.5.0 (May 14, 2023)

Added

  • Added the ability to use a different model for the forward pass. This is a novel feature that lets you train better policies when the model is non-convex or does not have a well-defined dual. See the Alternative forward models tutorial in which we train convex and non-convex formulations of the optimal power flow problem. (#611)

Other

  • Updated missing changelog entries (#608)
  • Removed global variables (#610)
  • Converted the Options struct to keyword arguments. This struct was a private implementation detail, but the change is breaking if you developed an extension to SDDP that touched these internals. (#612)
  • Fixed some typos (#613)

v1.4.0 (May 8, 2023)

Added

Fixed

  • Fixed parsing of some MSPFormat files (#602) (#604)
  • Fixed printing in header (#605)

v1.3.0 (May 3, 2023)

Added

  • Added experimental support for SDDP.MSPFormat.read_from_file (#593)

Other

  • Updated to StochOptFormat v0.3 (#600)

v1.2.1 (May 1, 2023)

Fixed

  • Fixed log_every_seconds (#597)

v1.2.0 (May 1, 2023)

Added

Other

  • Tweaked how the log is printed (#588)
  • Updated to StochOptFormat v0.2 (#592)

v1.1.4 (April 10, 2023)

Fixed

  • Logs are now flushed every iteration (#584)

Other

  • Added docstrings to various functions (#581)
  • Minor documentation updates (#580)
  • Clarified integrality documentation (#582)
  • Updated the README (#585)
  • Number of numerical issues is now printed to the log (#586)

v1.1.3 (April 2, 2023)

Other

v1.1.2 (March 18, 2023)

Other

v1.1.1 (March 16, 2023)

Other

  • Fixed email in Project.toml
  • Added notebook to documentation tutorials (#571)

v1.1.0 (January 12, 2023)

Added

v1.0.0 (January 3, 2023)

Although we're bumping MAJOR version, this is a non-breaking release. Going forward:

  • New features will bump the MINOR version
  • Bug fixes, maintenance, and documentation updates will bump the PATCH version
  • We will support only the Long Term Support (currently v1.6.7) and the latest patch (currently v1.8.4) releases of Julia. Updates to the LTS version will bump the MINOR version
  • Updates to the compat bounds of package dependencies will bump the PATCH version.

We do not intend any breaking changes to the public API, which would require a new MAJOR release. The public API is everything defined in the documentation. Anything not in the documentation is considered private and may change in any PATCH release.

Added

Other

v0.4.9 (January 3, 2023)

Added

Other

  • Added tutorial on Markov Decision Processes (#556)
  • Added two-stage newsvendor tutorial (#557)
  • Refactored the layout of the documentation (#554) (#555)
  • Updated copyright to 2023 (#558)
  • Fixed errors in the documentation (#561)

v0.4.8 (December 19, 2022)

Added

Fixed

  • Reverted then fixed (#531) because it failed to account for problems with integer variables (#546) (#551)

v0.4.7 (December 17, 2022)

Added

  • Added initial_node support to InSampleMonteCarlo and OutOfSampleMonteCarlo (#535)

Fixed

  • Rethrow InterruptException when solver is interrupted (#534)
  • Fixed numerical recovery when we need dual solutions (#531) (Thanks @bfpc)
  • Fixed re-using the dashboard = true option between solves (#538)
  • Fixed bug when no @stageobjective is set (now defaults to 0.0) (#539)
  • Fixed errors thrown when invalid inputs are provided to add_objective_state (#540)

Other

  • Drop support for Julia versions prior to 1.6 (#533)
  • Updated versions of dependencies (#522) (#533)
  • Switched to HiGHS in the documentation and tests (#533)
  • Added license headers (#519)
  • Fixed link in air conditioning example (#521) (Thanks @conema)
  • Clarified variable naming in deterministic equivalent (#525) (Thanks @lucasprocessi)
  • Added this change log (#536)
  • Cuts are now written to model.cuts.json when numerical instability is discovered. This can aid debugging because it allows you to reload the cuts as of the iteration that caused the numerical issue (#537)

v0.4.6 (March 25, 2022)

Other

  • Updated to JuMP v1.0 (#517)

v0.4.5 (March 9, 2022)

Fixed

  • Fixed issue with set_silent in a subproblem (#510)

Other

v0.4.4 (December 11, 2021)

Added

  • Added BanditDuality (#471)
  • Added benchmark scripts (#475) (#476) (#490)
  • write_cuts_to_file now saves visited states (#468)

Fixed

  • Fixed BoundStalling in a deterministic policy (#470) (#474)
  • Fixed magnitude warning with zero coefficients (#483)

Other

  • Improvements to LagrangianDuality (#481) (#482) (#487)
  • Improvements to StrengthenedConicDuality (#486)
  • Switch to functional form for the tests (#478)
  • Fixed typos (#472) (Thanks @vfdev-5)
  • Update to JuMP v0.22 (#498)

v0.4.3 (August 31, 2021)

Added

  • Added biobjective solver (#462)
  • Added forward_pass_callback (#466)

Other

  • Update tutorials and documentation (#459) (#465)
  • Organize how paper materials are stored (#464)

v0.4.2 (August 24, 2021)

Fixed

  • Fixed a bug in Lagrangian duality (#457)

v0.4.1 (August 23, 2021)

Other

  • Minor changes to our implementation of LagrangianDuality (#454) (#455)

v0.4.0 (August 17, 2021)

Breaking

Other

v0.3.17 (July 6, 2021)

Added

Other

  • Display more model attributes (#438)
  • Documentation improvements (#433) (#437) (#439)

v0.3.16 (June 17, 2021)

Added

Other

  • Update risk measure docstrings (#418)

v0.3.15 (June 1, 2021)

Added

Fixed

Other

  • Add JuliaFormatter (#412)
  • Documentation improvements (#406) (#408)

v0.3.14 (March 30, 2021)

Fixed

  • Fixed O(N^2) behavior in get_same_children (#393)

v0.3.13 (March 27, 2021)

Fixed

  • Fixed bug in print.jl
  • Fixed compat of Reexport (#388)

v0.3.12 (March 22, 2021)

Added

  • Added problem statistics to header (#385) (#386)

Fixed

  • Fixed subtypes in visualization (#384)

v0.3.11 (March 22, 2021)

Fixed

  • Fixed constructor in direct mode (#383)

Other

  • Fix documentation (#379)

v0.3.10 (February 23, 2021)

Fixed

  • Fixed seriescolor in publication plot (#376)

v0.3.9 (February 20, 2021)

Added

  • Add option to simulate with different incoming state (#372)
  • Added warning for cuts with high dynamic range (#373)

Fixed

  • Fixed seriesalpha in publication plot (#375)

v0.3.8 (January 19, 2021)

Other

v0.3.7 (January 8, 2021)

Other

v0.3.6 (December 17, 2020)

Other

  • Fix typos (#358)
  • Collapse navigation bar in docs (#359)
  • Update TagBot.yml (#361)

v0.3.5 (November 18, 2020)

Other

  • Update citations (#348)
  • Switch to GitHub actions (#355)

v0.3.4 (August 25, 2020)

Added

  • Added non-uniform distributionally robust risk measure (#328)
  • Added numerical recovery functions (#330)
  • Added experimental StochOptFormat (#332) (#336) (#337) (#341) (#343) (#344)
  • Added entropic risk measure (#347)
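
A sketch of using the new risk measure, again assuming the toy model from the v0.4.7 sketch above is in scope; the risk-aversion parameter 0.1 is an arbitrary illustrative value:

    # Risk measures are passed to SDDP.train via the risk_measure keyword.
    # Larger values of the Entropic parameter correspond to more risk aversion.
    SDDP.train(model; risk_measure = SDDP.Entropic(0.1), iteration_limit = 5)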

Other

v0.3.3 (June 19, 2020)

Added

  • Added asynchronous support for price and belief states (#325)
  • Added ForwardPass plug-in system (#320)

Fixed

  • Fix check for probabilities in Markovian graph (#322)

v0.3.2 (April 6, 2020)

Added

Other

  • Improve error message in deterministic equivalent (#312)
  • Update to RecipesBase 1.0 (#313)

v0.3.1 (February 26, 2020)

Fixed

  • Fixed filename in integrality_handlers.jl (#304)

v0.3.0 (February 20, 2020)

Breaking

  • Breaking changes to update to JuMP v0.21 (#300).

v0.2.4 (February 7, 2020)

Added

  • Added a counter for the number of total subproblem solves (#301)

Other

  • Update formatter (#298)
  • Added tests (#299)

v0.2.3 (January 24, 2020)

Added

  • Added support for convex risk measures (#294)

Fixed

  • Fixed bug when subproblem is infeasible (#296)
  • Fixed bug in deterministic equivalent (#297)

Other

  • Added example from IJOC paper (#293)

v0.2.2 (January 10, 2020)

Fixed

  • Fixed flaky time limit in tests (#291)

Other

  • Removed MathOptFormat.jl (#289)
  • Update copyright (#290)

v0.2.1 (December 19, 2019)

Added

  • Added support for approximating a Markov lattice (#282) (#285) (see the sketch below)
  • Add tools for visualizing the value function (#272) (#286)
  • Write .mof.json files on error (#284)
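
A self-contained sketch of the lattice approximation added in #282 and #285. The simulator below is an arbitrary random walk, and the budget and scenarios values are illustrative:

    using SDDP

    # The simulator must take no arguments and return one realization of the
    # stochastic process as a Vector{Float64}; here, a 5-step random walk.
    simulator() = cumsum(rand(5) .- 0.5)

    # budget: number of nodes in the fitted graph.
    # scenarios: number of simulator calls used to estimate the transition
    # probabilities.
    graph = SDDP.MarkovianGraph(simulator; budget = 10, scenarios = 100)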

Other

  • Improve documentation (#281) (#283)
  • Update tests for Julia 1.3 (#287)

v0.2.0 (December 16, 2019)

This version added the asynchronous parallel implementation, along with a few minor breaking changes to how we iterate internally. These changes did not affect basic user-facing models; they only affected code that implemented some of the extension features. It probably could have been a v1.1 release.

Added

  • Added asynchronous parallel implementation (#277)
  • Added roll-out algorithm for cyclic graphs (#279)

Other

  • Improved error messages in PolicyGraph (#271)
  • Added JuliaFormatter (#273) (#276)
  • Fixed compat bounds (#274) (#278)
  • Added documentation for simulating non-standard graphs (#280)

v0.1.0 (October 17, 2019)

A complete rewrite of SDDP.jl based on the policy graph framework. This was essentially a new package. It has minimal code in common with the previous implementation.

Development started on September 28, 2018 in Kokako.jl, and the code was merged into SDDP.jl on March 14, 2019.

The pull request SDDP.jl#180 lists the 29 issues that the rewrite closed.

v0.0.1 (April 18, 2018)

Initial release. Development had been underway since January 22, 2016 in the StochDualDynamicProgram.jl repository. The last development commit there was April 5, 2017. Work then continued in this repository for a year before the first tagged release.

diff --git a/previews/PR797/search_index.js b/previews/PR797/search_index.js index 664864f10..63f08f068 100644 --- a/previews/PR797/search_index.js +++ b/previews/PR797/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"guides/create_a_general_policy_graph/#Create-a-general-policy-graph","page":"Create a general policy graph","title":"Create a general policy graph","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"DocTestSetup = quote\n using SDDP, HiGHS\nend","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"SDDP.jl uses the concept of a policy graph to formulate multistage stochastic programming problems. For more details, read An introduction to SDDP.jl or the paper Dowson, O., (2020). The policy graph decomposition of multistage stochastic optimization problems. Networks, 76(1), 3-23. doi.","category":"page"},{"location":"guides/create_a_general_policy_graph/#Creating-a-[SDDP.Graph](@ref)","page":"Create a general policy graph","title":"Creating a SDDP.Graph","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/#Linear-graphs","page":"Create a general policy graph","title":"Linear graphs","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"Linear policy graphs can be created using the SDDP.LinearGraph function.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"julia> graph = SDDP.LinearGraph(3)\nRoot\n 0\nNodes\n 1\n 2\n 3\nArcs\n 0 => 1 w.p. 1.0\n 1 => 2 w.p. 1.0\n 2 => 3 w.p. 1.0","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"We can add nodes to a graph using SDDP.add_node and edges using SDDP.add_edge.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"julia> SDDP.add_node(graph, 4)\n\njulia> SDDP.add_edge(graph, 3 => 4, 1.0)\n\njulia> SDDP.add_edge(graph, 4 => 1, 0.9)\n\njulia> graph\nRoot\n 0\nNodes\n 1\n 2\n 3\n 4\nArcs\n 0 => 1 w.p. 1.0\n 1 => 2 w.p. 1.0\n 2 => 3 w.p. 1.0\n 3 => 4 w.p. 1.0\n 4 => 1 w.p. 0.9","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"Look! We just made a cyclic graph! SDDP.jl can solve infinite horizon problems. 
The probability on the arc that completes a cycle should be interpreted as a discount factor.","category":"page"},{"location":"guides/create_a_general_policy_graph/#guide_unicyclic_policy_graph","page":"Create a general policy graph","title":"Unicyclic policy graphs","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"Linear policy graphs with a single infinite-horizon cycle can be created using the SDDP.UnicyclicGraph function.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"julia> SDDP.UnicyclicGraph(0.95; num_nodes = 2)\nRoot\n 0\nNodes\n 1\n 2\nArcs\n 0 => 1 w.p. 1.0\n 1 => 2 w.p. 1.0\n 2 => 1 w.p. 0.95","category":"page"},{"location":"guides/create_a_general_policy_graph/#guide_markovian_policy_graph","page":"Create a general policy graph","title":"Markovian policy graphs","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"Markovian policy graphs can be created using the SDDP.MarkovianGraph function.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"julia> SDDP.MarkovianGraph(Matrix{Float64}[[1.0]', [0.4 0.6]])\nRoot\n (0, 1)\nNodes\n (1, 1)\n (2, 1)\n (2, 2)\nArcs\n (0, 1) => (1, 1) w.p. 1.0\n (1, 1) => (2, 1) w.p. 0.4\n (1, 1) => (2, 2) w.p. 0.6","category":"page"},{"location":"guides/create_a_general_policy_graph/#General-graphs","page":"Create a general policy graph","title":"General graphs","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"Arbitrarily complicated graphs can be constructed using SDDP.Graph, SDDP.add_node and SDDP.add_edge. For example","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"julia> graph = SDDP.Graph(:root_node)\nRoot\n root_node\nNodes\n {}\nArcs\n {}\n\njulia> SDDP.add_node(graph, :decision_node)\n\njulia> SDDP.add_edge(graph, :root_node => :decision_node, 1.0)\n\njulia> SDDP.add_edge(graph, :decision_node => :decision_node, 0.9)\n\njulia> graph\nRoot\n root_node\nNodes\n decision_node\nArcs\n root_node => decision_node w.p. 1.0\n decision_node => decision_node w.p. 
0.9","category":"page"},{"location":"guides/create_a_general_policy_graph/#Creating-a-policy-graph","page":"Create a general policy graph","title":"Creating a policy graph","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"Once you have constructed an instance of SDDP.Graph, you can create a policy graph by passing the graph as the first argument.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"julia> graph = SDDP.Graph(\n :root_node,\n [:decision_node],\n [\n (:root_node => :decision_node, 1.0),\n (:decision_node => :decision_node, 0.9)\n ]);\n\njulia> model = SDDP.PolicyGraph(\n graph,\n lower_bound = 0,\n optimizer = HiGHS.Optimizer) do subproblem, node\n println(\"Called from node: \", node)\n end;\nCalled from node: decision_node","category":"page"},{"location":"guides/create_a_general_policy_graph/#Special-cases","page":"Create a general policy graph","title":"Special cases","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"There are two special cases which cover the majority of models in the literature.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"SDDP.LinearPolicyGraph is a special case where a SDDP.LinearGraph is passed as the first argument.\nSDDP.MarkovianPolicyGraph is a special case where a SDDP.MarkovianGraph is passed as the first argument.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"Note that the type of the names of all nodes (including the root node) must be the same. In this case, they are Symbols.","category":"page"},{"location":"guides/create_a_general_policy_graph/#Simulating-non-standard-policy-graphs","page":"Create a general policy graph","title":"Simulating non-standard policy graphs","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"If you simulate a policy graph with a node that has outgoing arcs that sum to less than one, you will end up with simulations of different lengths. 
(The most common case is an infinite horizon stochastic program, aka a linear policy graph with a single cycle.)","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"To simulate a fixed number of stages, use:","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"simulations = SDDP.simulate(\n model,\n 1,\n sampling_scheme = SDDP.InSampleMonteCarlo(\n max_depth = 10,\n terminate_on_dummy_leaf = false\n )\n)","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"Here, max_depth controls the number of stages, and terminate_on_dummy_leaf = false stops us from terminating early.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"See also Simulate using a different sampling scheme.","category":"page"},{"location":"guides/create_a_general_policy_graph/#Creating-a-Markovian-graph-automatically","page":"Create a general policy graph","title":"Creating a Markovian graph automatically","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"SDDP.jl can create a Markovian graph by automatically discretizing a one-dimensional stochastic process and fitting a Markov chain.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"To access this functionality, pass a function that takes no arguments and returns a Vector{Float64} to SDDP.MarkovianGraph. To keyword arguments also need to be provided: budget is the total number of nodes in the Markovian graph, and scenarios is the number of realizations of the simulator function used to approximate the graph.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"In some cases, scenarios may be too small to provide a reasonable fit of the stochastic process. If so, SDDP.jl will automatically try to re-fit the Markov chain using more scenarios.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"function simulator()\n scenario = zeros(5)\n for i = 2:5\n scenario[i] = scenario[i - 1] + rand() - 0.5\n end\n return scenario\nend\n\nmodel = SDDP.PolicyGraph(\n SDDP.MarkovianGraph(simulator; budget = 10, scenarios = 100),\n sense = :Max,\n upper_bound = 1e3\n) do subproblem, node\n (stage, price) = node\n @variable(subproblem, x >= 0, SDDP.State, initial_value = 1)\n @constraint(subproblem, x.out <= x.in)\n @stageobjective(subproblem, price * x.out)\nend","category":"page"},{"location":"guides/debug_a_model/#Debug-a-model","page":"Debug a model","title":"Debug a model","text":"","category":"section"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"Building multistage stochastic programming models is hard. There are a lot of different pieces that need to be put together, and we typically have no idea of the optimal policy, so it can be hard (impossible?) 
to validate the solution.","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"That said, here are a few tips to verify and validate models built using SDDP.jl.","category":"page"},{"location":"guides/debug_a_model/#Writing-subproblems-to-file","page":"Debug a model","title":"Writing subproblems to file","text":"","category":"section"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"The first step to debug a model is to write out the subproblems to a file in order to check that you are actually building what you think you are building.","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"This can be achieved with the help of two functions: SDDP.parameterize and SDDP.write_subproblem_to_file. The first lets you parameterize a node given a noise, and the second writes out the subproblem to a file.","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"Here is an example model:","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"using SDDP, HiGHS\n\nmodel = SDDP.LinearPolicyGraph(\n stages = 2,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, t\n @variable(subproblem, x, SDDP.State, initial_value = 1)\n @variable(subproblem, y)\n @constraint(subproblem, balance, x.in == x.out + y)\n SDDP.parameterize(subproblem, [1.1, 2.2]) do ω\n @stageobjective(subproblem, ω * x.out)\n JuMP.fix(y, ω)\n end\nend\n\n# output\n\nA policy graph with 2 nodes.\n Node indices: 1, 2","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"Initially, model hasn't been parameterized with a concrete realizations of ω. Let's do so now by parameterizing the first subproblem with ω=1.1.","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"julia> SDDP.parameterize(model[1], 1.1)","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"Easy! To parameterize the second stage problem, we would have used model[2].","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"Now to write out the problem to a file. We'll get a few warnings because some variables and constraints don't have names. 
They don't matter, so ignore them.","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"julia> SDDP.write_subproblem_to_file(model[1], \"subproblem.lp\")\n\njulia> read(\"subproblem.lp\") |> String |> print\nminimize\nobj: 1.1 x_out + 1 x4\nsubject to\nbalance: 1 x_in - 1 x_out - 1 y = 0\nBounds\nx_in free\nx_out free\ny = 1.1\nx4 >= 0\nEnd","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"It is easy to see that ω has been set in the objective, and as the fixed value for y.","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"It is also possible to parameterize the subproblems using values for ω that are not in the original problem formulation.","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"julia> SDDP.parameterize(model[1], 3.3)\n\njulia> SDDP.write_subproblem_to_file(model[1], \"subproblem.lp\")\n\njulia> read(\"subproblem.lp\") |> String |> print\nminimize\nobj: 3.3 x_out + 1 x4\nsubject to\nbalance: 1 x_in - 1 x_out - 1 y = 0\nBounds\nx_in free\nx_out free\ny = 3.3\nx4 >= 0\nEnd\n\njulia> rm(\"subproblem.lp\") # Clean up.","category":"page"},{"location":"guides/debug_a_model/#Solve-the-deterministic-equivalent","page":"Debug a model","title":"Solve the deterministic equivalent","text":"","category":"section"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"Sometimes, it can be helpful to solve the deterministic equivalent of a problem in order to obtain an exact solution to the problem. To obtain a JuMP model that represents the deterministic equivalent, use SDDP.deterministic_equivalent. The returned model is just a normal JuMP model. Use JuMP to optimize it and query the solution.","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"julia> det_equiv = SDDP.deterministic_equivalent(model, HiGHS.Optimizer)\nA JuMP Model\n├ solver: HiGHS\n├ objective_sense: MIN_SENSE\n│ └ objective_function_type: AffExpr\n├ num_variables: 24\n├ num_constraints: 28\n│ ├ AffExpr in MOI.EqualTo{Float64}: 10\n│ ├ VariableRef in MOI.EqualTo{Float64}: 8\n│ ├ VariableRef in MOI.GreaterThan{Float64}: 6\n│ └ VariableRef in MOI.LessThan{Float64}: 4\n└ Names registered in the model: none\n\njulia> set_silent(det_equiv)\n\njulia> optimize!(det_equiv)\n\njulia> objective_value(det_equiv)\n-5.472500000000001","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"warning: Warning\nThe deterministic equivalent scales poorly with problem size. 
Only use this on small problems!","category":"page"},{"location":"guides/add_multidimensional_noise/#Add-multi-dimensional-noise-terms","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"","category":"section"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"DocTestSetup = quote\n using SDDP, HiGHS\nend","category":"page"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"Multi-dimensional stagewise-independent random variables can be created by forming the Cartesian product of the random variables.","category":"page"},{"location":"guides/add_multidimensional_noise/#A-simple-example","page":"Add multi-dimensional noise terms","title":"A simple example","text":"","category":"section"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"If the sample space and probabilities are given as vectors for each marginal distribution, do:","category":"page"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"julia> model = SDDP.LinearPolicyGraph(\n stages = 3,\n lower_bound = 0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, t\n @variable(subproblem, x, SDDP.State, initial_value = 0.0)\n Ω = [(value = v, coefficient = c) for v in [1, 2] for c in [3, 4, 5]]\n P = [v * c for v in [0.5, 0.5] for c in [0.3, 0.5, 0.2]]\n SDDP.parameterize(subproblem, Ω, P) do ω\n JuMP.fix(x.out, ω.value)\n @stageobjective(subproblem, ω.coefficient * x.out)\n println(\"ω is: \", ω)\n end\n end;\n\njulia> SDDP.simulate(model, 1);\nω is: (value = 1, coefficient = 4)\nω is: (value = 1, coefficient = 3)\nω is: (value = 2, coefficient = 4)","category":"page"},{"location":"guides/add_multidimensional_noise/#Using-Distributions.jl","page":"Add multi-dimensional noise terms","title":"Using Distributions.jl","text":"","category":"section"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"For sampling multidimensional random variates, it can be useful to use the Product type from Distributions.jl.","category":"page"},{"location":"guides/add_multidimensional_noise/#Finite-discrete-distributions","page":"Add multi-dimensional noise terms","title":"Finite discrete distributions","text":"","category":"section"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"There are several ways to go about this. If the sample space is finite, and small enough that it makes sense to enumerate each element, we can use Base.product and Distributions.support to generate the entire sample space Ω from each of the marginal distributions. 
","category":"page"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"We can then evaluate the density function of the product distribution on each element of this space to get the vector of corresponding probabilities, P.","category":"page"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"julia> import Distributions\n\njulia> distributions = [\n Distributions.Binomial(10, 0.5),\n Distributions.Bernoulli(0.5),\n Distributions.truncated(Distributions.Poisson(5), 2, 8)\n ];\n\njulia> supports = Distributions.support.(distributions);\n\njulia> Ω = vec([collect(ω) for ω in Base.product(supports...)]);\n\njulia> P = [Distributions.pdf(Distributions.Product(distributions), ω) for ω in Ω];\n\njulia> model = SDDP.LinearPolicyGraph(\n stages = 3,\n lower_bound = 0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, t\n @variable(subproblem, x, SDDP.State, initial_value = 0.0)\n SDDP.parameterize(subproblem, Ω, P) do ω\n JuMP.fix(x.out, ω[1])\n @stageobjective(subproblem, ω[2] * x.out + ω[3])\n println(\"ω is: \", ω)\n end\n end;\n\njulia> SDDP.simulate(model, 1);\nω is: [10, 0, 3]\nω is: [0, 1, 6]\nω is: [6, 0, 5]","category":"page"},{"location":"guides/add_multidimensional_noise/#Sampling","page":"Add multi-dimensional noise terms","title":"Sampling","text":"","category":"section"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"For sample spaces that are too large to explicitly represent, we can instead approximate the distribution by a sample of N points. Now Ω is a sample from the full sample space, and P is the uniform distribution over those points. Points with higher density in the full sample space will appear more frequently in Ω.","category":"page"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"julia> import Distributions\n\njulia> distributions = Distributions.Product([\n Distributions.Binomial(100, 0.5),\n Distributions.Geometric(1 / 20),\n Distributions.Poisson(20),\n ]);\n\njulia> N = 100;\n\njulia> Ω = [rand(distributions) for _ in 1:N];\n\njulia> P = fill(1 / N, N);\n\njulia> model = SDDP.LinearPolicyGraph(\n stages = 3,\n lower_bound = 0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, t\n @variable(subproblem, x, SDDP.State, initial_value = 0.0)\n SDDP.parameterize(subproblem, Ω, P) do ω\n JuMP.fix(x.out, ω[1])\n @stageobjective(subproblem, ω[2] * x.out + ω[3])\n println(\"ω is: \", ω)\n end\n end;\n\njulia> SDDP.simulate(model, 1);\nω is: [54, 38, 19]\nω is: [43, 3, 13]\nω is: [43, 4, 17]","category":"page"},{"location":"examples/booking_management/","page":"Booking management","title":"Booking management","text":"EditURL = \"booking_management.jl\"","category":"page"},{"location":"examples/booking_management/#Booking-management","page":"Booking management","title":"Booking management","text":"","category":"section"},{"location":"examples/booking_management/","page":"Booking management","title":"Booking management","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/booking_management/","page":"Booking management","title":"Booking management","text":"This example concerns the acceptance of booking requests for rooms in a hotel in the lead up to a large event.","category":"page"},{"location":"examples/booking_management/","page":"Booking management","title":"Booking management","text":"Each stage, we receive a booking request and can choose to accept or decline it. Once accepted, bookings cannot be terminated.","category":"page"},{"location":"examples/booking_management/","page":"Booking management","title":"Booking management","text":"using SDDP, HiGHS, Test\n\nfunction booking_management_model(num_days, num_rooms, num_requests)\n # maximum revenue that could be accrued.\n max_revenue = (num_rooms + num_requests) * num_days * num_rooms\n # booking_requests is a vector of {0,1} arrays of size\n # (num_days x num_rooms) if the room is requested.\n booking_requests = Array{Int,2}[]\n for room in 1:num_rooms\n for day in 1:num_days\n # note: length_of_stay is 0 indexed to avoid unnecessary +/- 1\n # on the indexing\n for length_of_stay in 0:(num_days-day)\n req = zeros(Int, (num_rooms, num_days))\n req[room:room, day.+(0:length_of_stay)] .= 1\n push!(booking_requests, req)\n end\n end\n end\n\n return model = SDDP.LinearPolicyGraph(;\n stages = num_requests,\n upper_bound = max_revenue,\n sense = :Max,\n optimizer = HiGHS.Optimizer,\n ) do sp, stage\n @variable(\n sp,\n 0 <= vacancy[room = 1:num_rooms, day = 1:num_days] <= 1,\n SDDP.State,\n Bin,\n initial_value = 1\n )\n @variables(\n sp,\n begin\n # Accept request for booking of room for length of time.\n 0 <= accept_request <= 1, Bin\n # Accept a booking for an individual room on an individual day.\n 0 <= room_request_accepted[1:num_rooms, 1:num_days] <= 1, Bin\n # Helper for JuMP.fix\n req[1:num_rooms, 1:num_days]\n end\n )\n for room in 1:num_rooms, day in 1:num_days\n @constraints(\n sp,\n begin\n # Update vacancy if we accept a room request\n vacancy[room, day].out ==\n vacancy[room, day].in - room_request_accepted[room, day]\n # Can't accept a request of a filled room\n room_request_accepted[room, day] <= vacancy[room, day].in\n # Can't accept invididual room request if entire request is declined\n room_request_accepted[room, day] <= accept_request\n # Can't accept request if room not requested\n room_request_accepted[room, day] <= req[room, day]\n # Accept all individual rooms is entire request is accepted\n room_request_accepted[room, day] + (1 - accept_request) >= req[room, day]\n end\n )\n end\n SDDP.parameterize(sp, booking_requests) do request\n return JuMP.fix.(req, request)\n end\n @stageobjective(\n sp,\n sum(\n (room + stage - 1) * room_request_accepted[room, day] for\n room in 1:num_rooms for day in 1:num_days\n )\n )\n end\nend\n\nfunction booking_management(duality_handler)\n m_1_2_5 = booking_management_model(1, 2, 5)\n SDDP.train(m_1_2_5; log_frequency = 5, duality_handler = duality_handler)\n if duality_handler == SDDP.ContinuousConicDuality()\n @test SDDP.calculate_bound(m_1_2_5) >= 7.25 - 1e-4\n else\n @test isapprox(SDDP.calculate_bound(m_1_2_5), 7.25, atol = 0.02)\n end\n\n m_2_2_3 = booking_management_model(2, 2, 3)\n SDDP.train(m_2_2_3; log_frequency = 10, duality_handler = duality_handler)\n if duality_handler == SDDP.ContinuousConicDuality()\n @test SDDP.calculate_bound(m_1_2_5) > 6.13\n else\n @test isapprox(SDDP.calculate_bound(m_2_2_3), 6.13, atol = 0.02)\n 
end\nend\n\nbooking_management(SDDP.ContinuousConicDuality())","category":"page"},{"location":"examples/booking_management/","page":"Booking management","title":"Booking management","text":"New version of HiGHS stalls booking_management(SDDP.LagrangianDuality())","category":"page"},{"location":"examples/no_strong_duality/","page":"No strong duality","title":"No strong duality","text":"EditURL = \"no_strong_duality.jl\"","category":"page"},{"location":"examples/no_strong_duality/#No-strong-duality","page":"No strong duality","title":"No strong duality","text":"","category":"section"},{"location":"examples/no_strong_duality/","page":"No strong duality","title":"No strong duality","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/no_strong_duality/","page":"No strong duality","title":"No strong duality","text":"This example is interesting, because strong duality doesn't hold for the extensive form (see if you can show why!), but we still converge.","category":"page"},{"location":"examples/no_strong_duality/","page":"No strong duality","title":"No strong duality","text":"using SDDP, HiGHS, Test\n\nfunction no_strong_duality()\n model = SDDP.PolicyGraph(\n SDDP.Graph(\n :root,\n [:node],\n [(:root => :node, 1.0), (:node => :node, 0.5)],\n );\n optimizer = HiGHS.Optimizer,\n lower_bound = 0.0,\n ) do sp, t\n @variable(sp, x, SDDP.State, initial_value = 1.0)\n @stageobjective(sp, x.out)\n @constraint(sp, x.in == x.out)\n end\n SDDP.train(model)\n @test SDDP.calculate_bound(model) ≈ 2.0 atol = 1e-5\n return\nend\n\nno_strong_duality()","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"CurrentModule = SDDP","category":"page"},{"location":"guides/add_integrality/#Integrality","page":"Integrality","title":"Integrality","text":"","category":"section"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"There's nothing special about binary and integer variables in SDDP.jl. Your models may contain a mix of binary, integer, or continuous state and control variables. 
Use the standard JuMP syntax to add binary or integer variables.","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"For example:","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"using SDDP, HiGHS\nmodel = SDDP.LinearPolicyGraph(\n stages = 3,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variable(sp, 0 <= x <= 100, Int, SDDP.State, initial_value = 0)\n @variable(sp, 0 <= u <= 200, integer = true)\n @variable(sp, v >= 0)\n @constraint(sp, x.out == x.in + u + v - 150)\n @stageobjective(sp, 2u + 6v + x.out)\nend","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"If you want finer control over how SDDP.jl computes subgradients in the backward pass, you can pass an SDDP.AbstractDualityHandler to the duality_handler argument of SDDP.train.","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"See Duality handlers for the list of handlers you can pass.","category":"page"},{"location":"guides/add_integrality/#Convergence","page":"Integrality","title":"Convergence","text":"","category":"section"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"SDDP.jl cannot guarantee that it will find a globally optimal policy when some of the variables are discrete. However, in most cases we find that it can still find an integer feasible policy that performs well in simulation.","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"Moreover, when the number of nodes in the graph is large, or there is uncertainty, we are not aware of another algorithm that can claim to find a globally optimal policy.","category":"page"},{"location":"guides/add_integrality/#Does-SDDP.jl-implement-the-SDDiP-algorithm?","page":"Integrality","title":"Does SDDP.jl implement the SDDiP algorithm?","text":"","category":"section"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"Most discussions of SDDiP in the literature confuse two unrelated things.","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"First, how to compute dual variables\nSecond, when the algorithm will converge to a globally optimal policy.","category":"page"},{"location":"guides/add_integrality/#Computing-dual-variables","page":"Integrality","title":"Computing dual variables","text":"","category":"section"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"The stochastic dual dynamic programming algorithm requires a subgradient of the objective with respect to the incoming state variable. 
","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"One way to obtain a valid subgradient is to compute an optimal value of the dual variable lambda in the following subproblem:","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"beginaligned\nV_i(x omega) = minlimits_barx x^prime u C_i(barx u omega) + mathbbE_j in i^+ varphi in Omega_jV_j(x^prime varphi)\n x^prime = T_i(barx u omega) \n u in U_i(barx omega) \n barx = x quad lambda\nendaligned","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"The easiest option is to relax integrality of the discrete variables to form a linear program and then use linear programming duality to obtain the dual. But we could also use Lagrangian duality without needing to relax the integrality constraints.","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"To compute the Lagrangian dual lambda, we penalize lambda^top(barx - x) in the objective instead of enforcing the constraint:","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"beginaligned\nmaxlimits_lambdaminlimits_barx x^prime u C_i(barx u omega) + mathbbE_j in i^+ varphi in Omega_jV_j(x^prime varphi) - lambda^top(barx - x)\n x^prime = T_i(barx u omega) \n u in U_i(barx omega)\nendaligned","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"You can use Lagrangian duality in SDDP.jl by passing SDDP.LagrangianDuality to the duality_handler argument of SDDP.train.","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"Compared with linear programming duality, the Lagrangian problem is difficult to solve because it requires the solution of many mixed-integer programs instead of a single linear program. This is one reason why \"SDDiP\" has poor performance.","category":"page"},{"location":"guides/add_integrality/#Convergence-2","page":"Integrality","title":"Convergence","text":"","category":"section"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"The second part to SDDiP is a very tightly scoped claim: if all of the state variables are binary and the algorithm uses Lagrangian duality to compute a subgradient, then it will converge to an optimal policy.","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"In many cases, papers claim to \"do SDDiP,\" but they have state variables which are not binary. In these cases, the algorithm is not guaranteed to converge to a globally optimal policy.","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"One work-around that has been suggested is to discretize the state variables into a set of binary state variables. 
However, this leads to a large number of binary state variables, which is another reason why \"SDDiP\" has poor performance.","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"In general, we recommend that you introduce integer variables into your model without fear of the consequences, and that you treat the resulting policy as a good heuristic, rather than an attempt to find a globally optimal policy.","category":"page"},{"location":"examples/StructDualDynProg.jl_prob5.2_2stages/","page":"StructDualDynProg: Problem 5.2, 2 stages","title":"StructDualDynProg: Problem 5.2, 2 stages","text":"EditURL = \"StructDualDynProg.jl_prob5.2_2stages.jl\"","category":"page"},{"location":"examples/StructDualDynProg.jl_prob5.2_2stages/#StructDualDynProg:-Problem-5.2,-2-stages","page":"StructDualDynProg: Problem 5.2, 2 stages","title":"StructDualDynProg: Problem 5.2, 2 stages","text":"","category":"section"},{"location":"examples/StructDualDynProg.jl_prob5.2_2stages/","page":"StructDualDynProg: Problem 5.2, 2 stages","title":"StructDualDynProg: Problem 5.2, 2 stages","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/StructDualDynProg.jl_prob5.2_2stages/","page":"StructDualDynProg: Problem 5.2, 2 stages","title":"StructDualDynProg: Problem 5.2, 2 stages","text":"This example comes from StochasticDualDynamicProgramming.jl","category":"page"},{"location":"examples/StructDualDynProg.jl_prob5.2_2stages/","page":"StructDualDynProg: Problem 5.2, 2 stages","title":"StructDualDynProg: Problem 5.2, 2 stages","text":"using SDDP, HiGHS, Test\n\nfunction test_prob52_2stages()\n model = SDDP.LinearPolicyGraph(;\n stages = 2,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, stage\n # ========== Problem data ==========\n n = 4\n m = 3\n i_c = [16, 5, 32, 2]\n C = [25, 80, 6.5, 160]\n T = [8760, 7000, 1500] / 8760\n D2 = [diff([0, 3919, 7329, 10315]) diff([0, 7086, 9004, 11169])]\n p2 = [0.9, 0.1]\n # ========== State Variables ==========\n @variable(subproblem, x[i = 1:n] >= 0, SDDP.State, initial_value = 0.0)\n # ========== Variables ==========\n @variables(subproblem, begin\n y[1:n, 1:m] >= 0\n v[1:n] >= 0\n penalty >= 0\n rhs_noise[1:m] # Dummy variable for RHS noise term.\n end)\n # ========== Constraints ==========\n @constraints(\n subproblem,\n begin\n [i = 1:n], x[i].out == x[i].in + v[i]\n [i = 1:n], sum(y[i, :]) <= x[i].in\n [j = 1:m], sum(y[:, j]) + penalty >= rhs_noise[j]\n end\n )\n if stage == 2\n # No investment in last stage.\n @constraint(subproblem, sum(v) == 0)\n end\n # ========== Uncertainty ==========\n if stage != 1 # no uncertainty in first stage\n SDDP.parameterize(subproblem, 1:size(D2, 2), p2) do ω\n for j in 1:m\n JuMP.fix(rhs_noise[j], D2[j, ω])\n end\n end\n end\n # ========== Stage objective ==========\n @stageobjective(subproblem, i_c' * v + C' * y * T + 1e6 * penalty)\n return\n end\n SDDP.train(model; log_frequency = 10)\n @test SDDP.calculate_bound(model) ≈ 340315.52 atol = 0.1\n return\nend\n\ntest_prob52_2stages()","category":"page"},{"location":"examples/stochastic_all_blacks/","page":"Stochastic All Blacks","title":"Stochastic All Blacks","text":"EditURL = \"stochastic_all_blacks.jl\"","category":"page"},{"location":"examples/stochastic_all_blacks/#Stochastic-All-Blacks","page":"Stochastic All Blacks","title":"Stochastic All 
Blacks","text":"","category":"section"},{"location":"examples/stochastic_all_blacks/","page":"Stochastic All Blacks","title":"Stochastic All Blacks","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/stochastic_all_blacks/","page":"Stochastic All Blacks","title":"Stochastic All Blacks","text":"using SDDP, HiGHS, Test\n\nfunction stochastic_all_blacks()\n # Number of time periods\n T = 3\n # Number of seats\n N = 2\n # R_ij = price of seat i at time j\n R = [3 3 6; 3 3 6]\n # Number of noises\n s = 3\n offers = [\n [[1, 1], [0, 0], [1, 1]],\n [[1, 0], [0, 0], [0, 0]],\n [[0, 1], [1, 0], [1, 1]],\n ]\n\n model = SDDP.LinearPolicyGraph(;\n stages = T,\n sense = :Max,\n upper_bound = 100.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, stage\n # Seat remaining?\n @variable(sp, 0 <= x[1:N] <= 1, SDDP.State, Bin, initial_value = 1)\n # Action: accept offer, or don't accept offer\n # We are allowed to accept some of the seats offered but not others\n @variable(sp, accept_offer[1:N], Bin)\n @variable(sp, offers_made[1:N])\n # Balance on seats\n @constraint(\n sp,\n balance[i in 1:N],\n x[i].in - x[i].out == accept_offer[i]\n )\n @stageobjective(sp, sum(R[i, stage] * accept_offer[i] for i in 1:N))\n SDDP.parameterize(sp, offers[stage]) do o\n return JuMP.fix.(offers_made, o)\n end\n @constraint(sp, accept_offer .<= offers_made)\n end\n\n SDDP.train(model; duality_handler = SDDP.LagrangianDuality())\n @test SDDP.calculate_bound(model) ≈ 8.0\n return\nend\n\nstochastic_all_blacks()","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"EditURL = \"example_milk_producer.jl\"","category":"page"},{"location":"tutorial/example_milk_producer/#Example:-the-milk-producer","page":"Example: the milk producer","title":"Example: the milk producer","text":"","category":"section"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"The purpose of this tutorial is to demonstrate how to fit a Markovian policy graph to a univariate stochastic process.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"This tutorial uses the following packages:","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"using SDDP\nimport HiGHS\nimport Plots","category":"page"},{"location":"tutorial/example_milk_producer/#Background","page":"Example: the milk producer","title":"Background","text":"","category":"section"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"A company produces milk for sale on a spot market each month. The quantity of milk they produce is uncertain, and so too is the price on the spot market. 
The company can store unsold milk in a stockpile of dried milk powder.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"The spot price is determined by an auction system, and so varies from month to month, but demonstrates serial correlation. In each auction, there is sufficient demand that the milk producer finds a buyer for all their milk, regardless of the quantity they supply. Furthermore, the spot price is independent of the milk producer (they are a small player in the market).","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"The spot price is highly volatile, and is the result of a process that is out of the control of the company. To counteract their price risk, the company engages in a forward contracting programme.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"The forward contracting programme is a deal for physical milk four months in the future.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"The futures price is the current spot price, plus some forward contango (the buyers gain certainty that they will receive the milk in the future).","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"In general, the milk company should forward contract (since they reduce their price risk), however they also have production risk. Therefore, it may be the case that they forward contract a fixed amount, but find that they do not produce enough milk to meet the fixed demand. They are then forced to buy additional milk on the spot market.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"The goal of the milk company is to choose the extent to which they forward contract in order to maximise (risk-adjusted) revenues, whilst managing their production risk.","category":"page"},{"location":"tutorial/example_milk_producer/#A-stochastic-process-for-price","page":"Example: the milk producer","title":"A stochastic process for price","text":"","category":"section"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"It is outside the scope of this tutorial, but assume that we have gone away and analysed historical data to fit a stochastic process to the sequence of monthly auction spot prices.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"One plausible model is a multiplicative auto-regressive model of order one, where the white noise term is modeled by a finite distribution of empirical residuals. 
We can simulate this stochastic process as follows:","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"function simulator()\n residuals = [0.0987, 0.199, 0.303, 0.412, 0.530, 0.661, 0.814, 1.010, 1.290]\n residuals = 0.1 * vcat(-residuals, 0.0, residuals)\n scenario = zeros(12)\n y, μ, α = 4.5, 6.0, 0.05\n for t in 1:12\n y = exp((1 - α) * log(y) + α * log(μ) + rand(residuals))\n scenario[t] = clamp(y, 3.0, 9.0)\n end\n return scenario\nend\n\nsimulator()","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"It may be helpful to visualize a number of simulations of the price process:","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"plot = Plots.plot(\n [simulator() for _ in 1:500];\n color = \"gray\",\n opacity = 0.2,\n legend = false,\n xlabel = \"Month\",\n ylabel = \"Price [\\$/kg]\",\n xlims = (1, 12),\n ylims = (3, 9),\n)","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"The prices gradually revert to the mean of $6/kg, and there is high volatility.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"We can't incorporate this price process directly into SDDP.jl, but we can fit a SDDP.MarkovianGraph directly from the simulator:","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"graph = SDDP.MarkovianGraph(simulator; budget = 30, scenarios = 10_000);\nnothing # hide","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"Here budget is the number of nodes in the policy graph, and scenarios is the number of simulations to use when estimating the transition probabilities.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"The graph contains too many nodes to be show, but we can plot it:","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"for ((t, price), edges) in graph.nodes\n for ((t′, price′), probability) in edges\n Plots.plot!(\n plot,\n [t, t′],\n [price, price′];\n color = \"red\",\n width = 3 * probability,\n )\n end\nend\n\nplot","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"That looks okay. Try changing budget and scenarios to see how different Markovian policy graphs can be created.","category":"page"},{"location":"tutorial/example_milk_producer/#Model","page":"Example: the milk producer","title":"Model","text":"","category":"section"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"Now that we have a Markovian graph, we can build the model. See if you can work out how we arrived at this formulation by reading the background description. 
Do all the variables and constraints make sense?","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"model = SDDP.PolicyGraph(\n graph;\n sense = :Max,\n upper_bound = 1e2,\n optimizer = HiGHS.Optimizer,\n) do sp, node\n # Decompose the node into the month (::Int) and spot price (::Float64)\n t, price = node::Tuple{Int,Float64}\n # Transactions on the futures market cost 0.01\n c_transaction = 0.01\n # It costs the company +50% to buy milk on the spot market and deliver to\n # their customers\n c_buy_premium = 1.5\n # Buyer is willing to pay +5% for certainty\n c_contango = 1.05\n # Distribution of production\n Ω_production = range(0.1, 0.2; length = 5)\n c_max_production = 12 * maximum(Ω_production)\n # x_stock: quantity of milk in stock pile\n @variable(sp, 0 <= x_stock, SDDP.State, initial_value = 0)\n # x_forward[i]: quantity of milk for delivery in i months\n @variable(sp, 0 <= x_forward[1:4], SDDP.State, initial_value = 0)\n # u_spot_sell: quantity of milk to sell on spot market\n @variable(sp, 0 <= u_spot_sell <= c_max_production)\n # u_spot_buy: quantity of milk to buy on spot market\n @variable(sp, 0 <= u_spot_buy <= c_max_production)\n # u_spot_buy: quantity of milk to sell on futures market\n c_max_futures = t <= 8 ? c_max_production : 0.0\n @variable(sp, 0 <= u_forward_sell <= c_max_futures)\n # ω_production: production random variable\n @variable(sp, ω_production)\n # Forward contracting constraints:\n @constraint(sp, [i in 1:3], x_forward[i].out == x_forward[i+1].in)\n @constraint(sp, x_forward[4].out == u_forward_sell)\n # Stockpile balance constraint\n @constraint(\n sp,\n x_stock.out ==\n x_stock.in + ω_production + u_spot_buy - x_forward[1].in - u_spot_sell\n )\n # The random variables. `price` comes from the Markov node\n #\n # !!! warning\n # The elements in Ω MUST be a tuple with 1 or 2 values, where the first\n # value is `price` and the second value is the random variable for the\n # current node. If the node is deterministic, use Ω = [(price,)].\n Ω = [(price, p) for p in Ω_production]\n SDDP.parameterize(sp, Ω) do ω\n # Fix the ω_production variable\n fix(ω_production, ω[2])\n @stageobjective(\n sp,\n # Sales on spot market\n ω[1] * (u_spot_sell - c_buy_premium * u_spot_buy) +\n # Sales on futures smarket\n (ω[1] * c_contango - c_transaction) * u_forward_sell\n )\n return\n end\n return\nend","category":"page"},{"location":"tutorial/example_milk_producer/#Training-a-policy","page":"Example: the milk producer","title":"Training a policy","text":"","category":"section"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"Now we have a model, we train a policy. The SDDP.SimulatorSamplingScheme is used in the forward pass. It generates an out-of-sample sequence of prices using simulator and traverses the closest sequence of nodes in the policy graph. 
When calling SDDP.parameterize for each subproblem, it uses the new out-of-sample price instead of the price associated with the Markov node.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"SDDP.train(\n model;\n time_limit = 20,\n risk_measure = 0.5 * SDDP.Expectation() + 0.5 * SDDP.AVaR(0.25),\n sampling_scheme = SDDP.SimulatorSamplingScheme(simulator),\n)","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"warning: Warning\nWe're intentionally terminating the training early so that the documentation doesn't take too long to build. If you run this example locally, increase the time limit.","category":"page"},{"location":"tutorial/example_milk_producer/#Simulating-the-policy","page":"Example: the milk producer","title":"Simulating the policy","text":"","category":"section"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"When simulating the policy, we can also use the SDDP.SimulatorSamplingScheme.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"simulations = SDDP.simulate(\n model,\n 200,\n Symbol[:x_stock, :u_forward_sell, :u_spot_sell, :u_spot_buy];\n sampling_scheme = SDDP.SimulatorSamplingScheme(simulator),\n);\nnothing # hide","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"To show how the sampling scheme uses the new out-of-sample price instead of the price associated with the Markov node, compare the index of the Markov state visited in stage 12 of the first simulation:","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"simulations[1][12][:node_index]","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"to the realization of the noise (price, ω) passed to SDDP.parameterize:","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"simulations[1][12][:noise_term]","category":"page"},{"location":"tutorial/example_milk_producer/#Visualizing-the-policy","page":"Example: the milk producer","title":"Visualizing the policy","text":"","category":"section"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"Finally, we can plot the policy to gain insight (although note that we terminated the training early, so we should run the re-train the policy for more iterations before making too many judgements).","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"plot = Plots.plot(\n SDDP.publication_plot(simulations; title = \"x_stock.out\") do data\n return data[:x_stock].out\n end,\n SDDP.publication_plot(simulations; title = \"u_forward_sell\") do data\n return data[:u_forward_sell]\n end,\n SDDP.publication_plot(simulations; title = \"u_spot_buy\") do data\n return data[:u_spot_buy]\n end,\n SDDP.publication_plot(simulations; title = \"u_spot_sell\") do data\n return data[:u_spot_sell]\n 
end;\n layout = (2, 2),\n)","category":"page"},{"location":"tutorial/example_milk_producer/#Next-steps","page":"Example: the milk producer","title":"Next steps","text":"","category":"section"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"Train the policy for longer. What do you observe?\nTry creating different Markovian graphs. What happens if you add more nodes?\nTry different risk measures","category":"page"},{"location":"examples/FAST_production_management/","page":"FAST: the production management problem","title":"FAST: the production management problem","text":"EditURL = \"FAST_production_management.jl\"","category":"page"},{"location":"examples/FAST_production_management/#FAST:-the-production-management-problem","page":"FAST: the production management problem","title":"FAST: the production management problem","text":"","category":"section"},{"location":"examples/FAST_production_management/","page":"FAST: the production management problem","title":"FAST: the production management problem","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/FAST_production_management/","page":"FAST: the production management problem","title":"FAST: the production management problem","text":"An implementation of the Production Management example from FAST","category":"page"},{"location":"examples/FAST_production_management/","page":"FAST: the production management problem","title":"FAST: the production management problem","text":"using SDDP, HiGHS, Test\n\nfunction fast_production_management(; cut_type)\n DEMAND = [2, 10]\n H = 3\n N = 2\n C = [0.2, 0.7]\n S = 2 .+ [0.33, 0.54]\n model = SDDP.LinearPolicyGraph(;\n stages = H,\n lower_bound = -50.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, t\n @variable(sp, x[1:N] >= 0, SDDP.State, initial_value = 0.0)\n @variables(sp, begin\n s[i = 1:N] >= 0\n d\n end)\n @constraints(sp, begin\n [i = 1:N], s[i] <= x[i].in\n sum(s) <= d\n end)\n SDDP.parameterize(sp, t == 1 ? [0] : DEMAND) do ω\n return JuMP.fix(d, ω)\n end\n @stageobjective(sp, sum(C[i] * x[i].out for i in 1:N) - S's)\n end\n SDDP.train(model; cut_type = cut_type, print_level = 2, log_frequency = 5)\n @test SDDP.calculate_bound(model) ≈ -23.96 atol = 1e-2\nend\n\nfast_production_management(; cut_type = SDDP.SINGLE_CUT)\nfast_production_management(; cut_type = SDDP.MULTI_CUT)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"EditURL = \"example_reservoir.jl\"","category":"page"},{"location":"tutorial/example_reservoir/#Example:-deterministic-to-stochastic","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"","category":"section"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"The purpose of this tutorial is to explain how we can go from a deterministic time-staged optimal control model in JuMP to a multistage stochastic optimization model in SDDP.jl. 
As a motivating problem, we consider the hydro-thermal problem with a single reservoir.","category":"page"},{"location":"tutorial/example_reservoir/#Packages","page":"Example: deterministic to stochastic","title":"Packages","text":"","category":"section"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"This tutorial requires the following packages:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"using JuMP\nusing SDDP\nimport CSV\nimport DataFrames\nimport HiGHS\nimport Plots","category":"page"},{"location":"tutorial/example_reservoir/#Data","page":"Example: deterministic to stochastic","title":"Data","text":"","category":"section"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"First, we need some data for the problem. For this tutorial, we'll write CSV files to a temporary directory from Julia. If you have an existing file, you could change the filename to point to that instead.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"dir = mktempdir()\nfilename = joinpath(dir, \"example_reservoir.csv\")","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Here is the data","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"csv_data = \"\"\"\nweek,inflow,demand,cost\n1,3,7,10.2\\n2,2,7.1,10.4\\n3,3,7.2,10.6\\n4,2,7.3,10.9\\n5,3,7.4,11.2\\n\n6,2,7.6,11.5\\n7,3,7.8,11.9\\n8,2,8.1,12.3\\n9,3,8.3,12.7\\n10,2,8.6,13.1\\n\n11,3,8.9,13.6\\n12,2,9.2,14\\n13,3,9.5,14.5\\n14,2,9.8,14.9\\n15,3,10.1,15.3\\n\n16,2,10.4,15.8\\n17,3,10.7,16.2\\n18,2,10.9,16.6\\n19,3,11.2,17\\n20,3,11.4,17.4\\n\n21,3,11.6,17.7\\n22,2,11.7,18\\n23,3,11.8,18.3\\n24,2,11.9,18.5\\n25,3,12,18.7\\n\n26,2,12,18.9\\n27,3,12,19\\n28,2,11.9,19.1\\n29,3,11.8,19.2\\n30,2,11.7,19.2\\n\n31,3,11.6,19.2\\n32,2,11.4,19.2\\n33,3,11.2,19.1\\n34,2,10.9,19\\n35,3,10.7,18.9\\n\n36,2,10.4,18.8\\n37,3,10.1,18.6\\n38,2,9.8,18.5\\n39,3,9.5,18.4\\n40,3,9.2,18.2\\n\n41,2,8.9,18.1\\n42,3,8.6,17.9\\n43,2,8.3,17.8\\n44,3,8.1,17.7\\n45,2,7.8,17.6\\n\n46,3,7.6,17.5\\n47,2,7.4,17.5\\n48,3,7.3,17.5\\n49,2,7.2,17.5\\n50,3,7.1,17.6\\n\n51,3,7,17.7\\n52,3,7,17.8\\n\n\"\"\"\nwrite(filename, csv_data);\nnothing #hide","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"And here we read it into a DataFrame:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"data = CSV.read(filename, DataFrames.DataFrame)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"It's easier to visualize the data if we plot it:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Plots.plot(\n Plots.plot(data[!, :inflow]; ylabel 
= \"Inflow\"),\n Plots.plot(data[!, :demand]; ylabel = \"Demand\"),\n Plots.plot(data[!, :cost]; ylabel = \"Cost\", xlabel = \"Week\");\n layout = (3, 1),\n legend = false,\n)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"The number of weeks will be useful later:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"T = size(data, 1)","category":"page"},{"location":"tutorial/example_reservoir/#Deterministic-JuMP-model","page":"Example: deterministic to stochastic","title":"Deterministic JuMP model","text":"","category":"section"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"To start, we construct a deterministic model in pure JuMP.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Create a JuMP model, using HiGHS as the optimizer:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"model = Model(HiGHS.Optimizer)\nset_silent(model)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"x_storage[t]: the amount of water in the reservoir at the start of stage t:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"reservoir_max = 320.0\n@variable(model, 0 <= x_storage[1:T+1] <= reservoir_max)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"We need an initial condition for x_storage[1]. 
Fix it to 300 units:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"reservoir_initial = 300\nfix(x_storage[1], reservoir_initial; force = true)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"u_flow[t]: the amount of water to flow through the turbine in stage t:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"flow_max = 12\n@variable(model, 0 <= u_flow[1:T] <= flow_max)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"u_spill[t]: the amount of water to spill from the reservoir in stage t, bypassing the turbine:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"@variable(model, 0 <= u_spill[1:T])","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"u_thermal[t]: the amount of thermal generation in stage t:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"@variable(model, 0 <= u_thermal[1:T])","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"ω_inflow[t]: the amount of inflow to the reservoir in stage t:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"@variable(model, ω_inflow[1:T])","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"For this model, our inflow is fixed, so we fix it to the data we have:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"for t in 1:T\n fix(ω_inflow[t], data[t, :inflow])\nend","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"The water balance constraint says that the water in the reservoir at the start of stage t+1 is the water in the reservoir at the start of stage t, less the amount flowed through the turbine, u_flow[t], less the amount spilled, u_spill[t], plus the amount of inflow, ω_inflow[t], into the reservoir:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"@constraint(\n model,\n [t in 1:T],\n x_storage[t+1] == x_storage[t] - u_flow[t] - u_spill[t] + ω_inflow[t],\n)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"We also need a supply = demand constraint. 
In practice, the units of this would be in MWh, and there would be a conversion factor between the amount of water flowing through the turbine and the power output. To simplify, we assume that power and water have the same units, so that one \"unit\" of demand is equal to one \"unit\" of the reservoir x_storage[t]:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"@constraint(model, [t in 1:T], u_flow[t] + u_thermal[t] == data[t, :demand])","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Our objective is to minimize the cost of thermal generation:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"@objective(model, Min, sum(data[t, :cost] * u_thermal[t] for t in 1:T))","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Let's optimize and check the solution","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"optimize!(model)\nsolution_summary(model)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"The total cost is:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"objective_value(model)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Here's a plot of demand and generation:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Plots.plot(data[!, :demand]; label = \"Demand\", xlabel = \"Week\")\nPlots.plot!(value.(u_thermal); label = \"Thermal\")\nPlots.plot!(value.(u_flow); label = \"Hydro\")","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"And here's the storage over time:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Plots.plot(value.(x_storage); label = \"Storage\", xlabel = \"Week\")","category":"page"},{"location":"tutorial/example_reservoir/#Deterministic-SDDP-model","page":"Example: deterministic to stochastic","title":"Deterministic SDDP model","text":"","category":"section"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"For the next step, we show how to decompose our JuMP model into SDDP.jl. 
It should obtain the same solution.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"model = SDDP.LinearPolicyGraph(;\n stages = T,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variable(\n sp,\n 0 <= x_storage <= reservoir_max,\n SDDP.State,\n initial_value = reservoir_initial,\n )\n @variable(sp, 0 <= u_flow <= flow_max)\n @variable(sp, 0 <= u_thermal)\n @variable(sp, 0 <= u_spill)\n @variable(sp, ω_inflow)\n fix(ω_inflow, data[t, :inflow])\n @constraint(sp, x_storage.out == x_storage.in - u_flow - u_spill + ω_inflow)\n @constraint(sp, u_flow + u_thermal == data[t, :demand])\n @stageobjective(sp, data[t, :cost] * u_thermal)\n return\nend","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Can you see how the JuMP model maps to this syntax? We have created an SDDP.LinearPolicyGraph with T stages, we're minimizing, and we're using HiGHS.Optimizer as the optimizer.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"A few bits might be non-obvious:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"We need to provide a lower bound for the objective function. Since our costs are always positive, a valid lower bound for the total cost is 0.0.\nWe define x_storage as a state variable using SDDP.State. A state variable is any variable that flows through time, and for which we need to know its value in stage t-1 to compute the best action in stage t. The state variable x_storage is actually two decision variables, x_storage.in and x_storage.out, which represent x_storage[t] and x_storage[t+1] respectively.\nWe need to use @stageobjective instead of @objective.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Instead of calling JuMP.optimize!, SDDP.jl uses a train method. With our machine learning hat on, you can think of SDDP.jl as training a function for each stage that accepts the current reservoir state as input and returns the optimal actions as output. It is also an iterative algorithm, so we need to specify when it should terminate:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"SDDP.train(model; iteration_limit = 10)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"As a quick sanity check, did we get the same cost as our JuMP model?","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"SDDP.calculate_bound(model)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"That's good. Next, we check the values of the decision variables. This isn't as straightforward as in our JuMP model. 
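For a single stage, you can query the trained policy directly with a decision rule; this is a minimal sketch, assuming the SDDP.DecisionRule and SDDP.evaluate API and this model's variable names:\n\nrule = SDDP.DecisionRule(model; node = 1)\nSDDP.evaluate(\n rule;\n incoming_state = Dict(:x_storage => 300.0),\n controls_to_record = [:u_flow, :u_thermal],\n)\n\nThere is, however, no single vector of decisions covering the whole horizon. 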
Instead, we need to simulate the policy, and then extract the values of the decision variables from the results of the simulation.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Since our model is deterministic, we need only 1 replication of the simulation, and we want to record the values of the x_storage, u_flow, and u_thermal variables:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"simulations = SDDP.simulate(\n model,\n 1, # Number of replications\n [:x_storage, :u_flow, :u_thermal],\n);\nnothing #hide","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"The simulations vector is too big to show. But it contains one element for each replication, and each replication contains one dictionary for each stage.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"For example, the data corresponding to the tenth stage in the first replication is:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"simulations[1][10]","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Let's grab the trace of the u_thermal and u_flow variables in the first replication, and then plot them:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"r_sim = [sim[:u_thermal] for sim in simulations[1]]\nu_sim = [sim[:u_flow] for sim in simulations[1]]\n\nPlots.plot(data[!, :demand]; label = \"Demand\", xlabel = \"Week\")\nPlots.plot!(r_sim; label = \"Thermal\")\nPlots.plot!(u_sim; label = \"Hydro\")","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Perfect. That's the same as we got before.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Now let's look at x_storage. This is a little more complicated, because we need to grab the outgoing value of the state variable in each stage:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"x_sim = [sim[:x_storage].out for sim in simulations[1]]\n\nPlots.plot(x_sim; label = \"Storage\", xlabel = \"Week\")","category":"page"},{"location":"tutorial/example_reservoir/#Stochastic-SDDP-model","page":"Example: deterministic to stochastic","title":"Stochastic SDDP model","text":"","category":"section"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Now we add some randomness to our model. 
In each stage, we assume that the inflow could be: 2 units lower, with 30% probability; the same as before, with 40% probability; or 5 units higher, with 30% probability.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"model = SDDP.LinearPolicyGraph(;\n stages = T,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variable(\n sp,\n 0 <= x_storage <= reservoir_max,\n SDDP.State,\n initial_value = reservoir_initial,\n )\n @variable(sp, 0 <= u_flow <= flow_max)\n @variable(sp, 0 <= u_thermal)\n @variable(sp, 0 <= u_spill)\n @variable(sp, ω_inflow)\n # <--- This bit is new\n Ω, P = [-2, 0, 5], [0.3, 0.4, 0.3]\n SDDP.parameterize(sp, Ω, P) do ω\n fix(ω_inflow, data[t, :inflow] + ω)\n return\n end\n # --->\n @constraint(sp, x_storage.out == x_storage.in - u_flow - u_spill + ω_inflow)\n @constraint(sp, u_flow + u_thermal == data[t, :demand])\n @stageobjective(sp, data[t, :cost] * u_thermal)\n return\nend","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Can you see the differences?","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Let's train our new model. We need more iterations because of the stochasticity:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"SDDP.train(model; iteration_limit = 100)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Now simulate the policy. This time we do 100 replications because the policy is now stochastic instead of deterministic:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"simulations =\n SDDP.simulate(model, 100, [:x_storage, :u_flow, :u_thermal, :ω_inflow]);\nnothing #hide","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"And let's plot the use of thermal generation in each replication:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"plot = Plots.plot(data[!, :demand]; label = \"Demand\", xlabel = \"Week\")\nfor simulation in simulations\n Plots.plot!(plot, [sim[:u_thermal] for sim in simulation]; label = \"\")\nend\nplot","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Viewing and interpreting static plots like this is difficult, particularly as the number of simulations grows. 
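One simple alternative is to summarise the replications numerically before plotting; here is a small sketch (it uses the Statistics standard library, which is not otherwise loaded in this tutorial):\n\nimport Statistics\n# A T-by-100 matrix of thermal generation: one column per replication.\nthermal = [replication[t][:u_thermal] for t in 1:T, replication in simulations]\n# Average thermal generation in each week, across the replications.\navg_thermal = Statistics.mean(thermal; dims = 2)\n\n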
SDDP.jl includes an interactive SpaghettiPlot that makes things easier:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"plot = SDDP.SpaghettiPlot(simulations)\nSDDP.add_spaghetti(plot; title = \"Storage\") do sim\n return sim[:x_storage].out\nend\nSDDP.add_spaghetti(plot; title = \"Hydro\") do sim\n return sim[:u_flow]\nend\nSDDP.add_spaghetti(plot; title = \"Inflow\") do sim\n return sim[:ω_inflow]\nend\nSDDP.plot(\n plot,\n \"spaghetti_plot.html\";\n # We need this to build the documentation. Set to true if running locally.\n open = false,\n)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"info: Info\nIf you have trouble viewing the plot, you can open it in a new window.","category":"page"},{"location":"tutorial/example_reservoir/#Cyclic-graphs","page":"Example: deterministic to stochastic","title":"Cyclic graphs","text":"","category":"section"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"One major problem with our model is that the reservoir is empty at the end of the time horizon. This is because our model does not consider the cost of future years after the T weeks.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"We can fix this using a cyclic policy graph. 
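Intuitively, instead of the problem ending after week T, the process now returns to week 1 with some probability p. The expected number of passes through the graph is then 1 / (1 - p), so a loop probability of 0.95 corresponds to an average of 1 / (1 - 0.95) = 20 passes, that is, roughly 20 years of future operation. 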
One way to construct a graph is with the SDDP.UnicyclicGraph constructor:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"SDDP.UnicyclicGraph(0.7; num_nodes = 2)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"This graph has two nodes, and a loop from node 2 back to node 1 with probability 0.7.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"We can construct a cyclic policy graph as follows:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"graph = SDDP.UnicyclicGraph(0.95; num_nodes = T)\nmodel = SDDP.PolicyGraph(\n graph;\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variable(\n sp,\n 0 <= x_storage <= reservoir_max,\n SDDP.State,\n initial_value = reservoir_initial,\n )\n @variable(sp, 0 <= u_flow <= flow_max)\n @variable(sp, 0 <= u_thermal)\n @variable(sp, 0 <= u_spill)\n @variable(sp, ω_inflow)\n Ω, P = [-2, 0, 5], [0.3, 0.4, 0.3]\n SDDP.parameterize(sp, Ω, P) do ω\n fix(ω_inflow, data[t, :inflow] + ω)\n return\n end\n @constraint(sp, x_storage.out == x_storage.in - u_flow - u_spill + ω_inflow)\n @constraint(sp, u_flow + u_thermal == data[t, :demand])\n @stageobjective(sp, data[t, :cost] * u_thermal)\n return\nend","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Notice how the only thing that has changed is our graph; the subproblems remain the same.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Let's train a policy:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"SDDP.train(model; iteration_limit = 100)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"When we simulate now, each trajectory will be a different length, because each cycle has a 95% probability of continuing and a 5% probability of stopping.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"simulations = SDDP.simulate(model, 3);\nlength.(simulations)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"We can simulate a fixed number of cycles by passing a sampling_scheme:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"simulations = SDDP.simulate(\n model,\n 100,\n [:x_storage, :u_flow];\n sampling_scheme = SDDP.InSampleMonteCarlo(;\n max_depth = 5 * T,\n terminate_on_dummy_leaf = false,\n ),\n);\nlength.(simulations)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: 
deterministic to stochastic","text":"Let's visualize the policy:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Plots.plot(\n SDDP.publication_plot(simulations; ylabel = \"Storage\") do sim\n return sim[:x_storage].out\n end,\n SDDP.publication_plot(simulations; ylabel = \"Hydro\") do sim\n return sim[:u_flow]\n end;\n layout = (2, 1),\n)","category":"page"},{"location":"tutorial/example_reservoir/#Next-steps","page":"Example: deterministic to stochastic","title":"Next steps","text":"","category":"section"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Our model is very basic. There are many aspects that we could improve:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Can you add a second reservoir to make a river chain?\nCan you modify the problem and data to use proper units, including a conversion between the volume of water flowing through the turbine and the electrical power output?","category":"page"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"CurrentModule = SDDP","category":"page"},{"location":"changelog/#Release-notes","page":"Release notes","title":"Release notes","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.","category":"page"},{"location":"changelog/#v1.9.0-(October-17,-2024)","page":"Release notes","title":"v1.9.0 (October 17, 2024)","text":"","category":"section"},{"location":"changelog/#Added","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added write_only_selected_cuts and cut_selection keyword arguments to write_cuts_to_file and read_cuts_from_file to skip potentially expensive operations (#781) (#784)\nAdded set_numerical_difficulty_callback to modify the subproblem on numerical difficulty (#790)","category":"page"},{"location":"changelog/#Fixed","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed the tests to skip threading tests if running in serial (#770)\nFixed BanditDuality to handle the case where the standard deviation is NaN (#779)\nFixed an error when lagged state variables are encountered in MSPFormat (#786)\nFixed publication_plot with replications of different lengths (#788)\nFixed CTRL+C interrupting the code at unsafe points (#789)","category":"page"},{"location":"changelog/#Other","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#771) (#772)\nUpdated printing because of changes in JuMP (#773)","category":"page"},{"location":"changelog/#v1.8.1-(August-5,-2024)","page":"Release notes","title":"v1.8.1 (August 5, 2024)","text":"","category":"section"},{"location":"changelog/#Fixed-2","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed various issues with SDDP.Threaded() (#761)\nFixed a deprecation warning for 
sorting a dictionary (#763)","category":"page"},{"location":"changelog/#Other-2","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Updated copyright notices (#762)\nUpdated .JuliaFormatter.toml (#764)","category":"page"},{"location":"changelog/#v1.8.0-(July-24,-2024)","page":"Release notes","title":"v1.8.0 (July 24, 2024)","text":"","category":"section"},{"location":"changelog/#Added-2","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added SDDP.Threaded(), which is an experimental parallel scheme that supports solving problems using multiple threads. Some parts of SDDP.jl may not be thread-safe, and this can cause incorrect results, segfaults, or other errors. Please use with care and report any issues by opening a GitHub issue. (#758)","category":"page"},{"location":"changelog/#Other-3","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Documentation improvements and fixes (#747) (#759)","category":"page"},{"location":"changelog/#v1.7.0-(June-4,-2024)","page":"Release notes","title":"v1.7.0 (June 4, 2024)","text":"","category":"section"},{"location":"changelog/#Added-3","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added sample_backward_noise_terms_with_state for creating backward pass sampling schemes that depend on the current primal state. (#742) (Thanks @arthur-brigatto)","category":"page"},{"location":"changelog/#Fixed-3","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed error message when publication_plot has non-finite data (#738)","category":"page"},{"location":"changelog/#Other-4","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Updated the logo constructor (#730)","category":"page"},{"location":"changelog/#v1.6.7-(February-1,-2024)","page":"Release notes","title":"v1.6.7 (February 1, 2024)","text":"","category":"section"},{"location":"changelog/#Fixed-4","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed non-constant state dimension in the MSPFormat reader (#695)\nFixed SimulatorSamplingScheme for deterministic nodes (#710)\nFixed line search in BFGS (#711)\nFixed handling of NEARLY_FEASIBLE_POINT status (#726)","category":"page"},{"location":"changelog/#Other-5","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#692) (#694) (#706) (#716) (#727)\nUpdated to StochOptFormat v1.0 (#705)\nAdded an experimental OuterApproximation algorithm (#709)\nUpdated .gitignore (#717)\nAdded code for MDP paper (#720) (#721)\nAdded Google analytics (#723)","category":"page"},{"location":"changelog/#v1.6.6-(September-29,-2023)","page":"Release notes","title":"v1.6.6 (September 29, 2023)","text":"","category":"section"},{"location":"changelog/#Other-6","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release 
notes","title":"Release notes","text":"Updated Example: two-stage newsvendor tutorial (#689)\nAdded a warning for people using SDDP.Statistical (#687)","category":"page"},{"location":"changelog/#v1.6.5-(September-25,-2023)","page":"Release notes","title":"v1.6.5 (September 25, 2023)","text":"","category":"section"},{"location":"changelog/#Fixed-5","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed duplicate nodes in MarkovianGraph (#681)","category":"page"},{"location":"changelog/#Other-7","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Updated tutorials (#677) (#678) (#682) (#683)\nFixed documentation preview (#679)","category":"page"},{"location":"changelog/#v1.6.4-(September-23,-2023)","page":"Release notes","title":"v1.6.4 (September 23, 2023)","text":"","category":"section"},{"location":"changelog/#Fixed-6","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed error for invalid log_frequency values (#665)\nFixed objective sense in deterministic_equivalent (#673)","category":"page"},{"location":"changelog/#Other-8","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Documentation updates (#658) (#666) (#671)\nSwitch to GitHub action for deploying docs (#668) (#670)\nUpdate to Documenter@1 (#669)","category":"page"},{"location":"changelog/#v1.6.3-(September-8,-2023)","page":"Release notes","title":"v1.6.3 (September 8, 2023)","text":"","category":"section"},{"location":"changelog/#Fixed-7","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed default stopping rule with iteration_limit or time_limit set (#662)","category":"page"},{"location":"changelog/#Other-9","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Various documentation improvements (#651) (#657) (#659) (#660)","category":"page"},{"location":"changelog/#v1.6.2-(August-24,-2023)","page":"Release notes","title":"v1.6.2 (August 24, 2023)","text":"","category":"section"},{"location":"changelog/#Fixed-8","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"MSPFormat now detect and exploit stagewise independent lattices (#653)\nFixed set_optimizer for models read from file (#654)","category":"page"},{"location":"changelog/#Other-10","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed typo in pglib_opf.jl (#647)\nFixed documentation build and added color (#652)","category":"page"},{"location":"changelog/#v1.6.1-(July-20,-2023)","page":"Release notes","title":"v1.6.1 (July 20, 2023)","text":"","category":"section"},{"location":"changelog/#Fixed-9","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed bugs in MSPFormat reader (#638) (#639)","category":"page"},{"location":"changelog/#Other-11","page":"Release 
notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Clarified OutOfSampleMonteCarlo docstring (#643)","category":"page"},{"location":"changelog/#v1.6.0-(July-3,-2023)","page":"Release notes","title":"v1.6.0 (July 3, 2023)","text":"","category":"section"},{"location":"changelog/#Added-4","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added RegularizedForwardPass (#624)\nAdded FirstStageStoppingRule (#634)","category":"page"},{"location":"changelog/#Other-12","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Removed an unbound type parameter (#632)\nFixed typo in docstring (#633)\nAdded Here-and-now and hazard-decision tutorial (#635)","category":"page"},{"location":"changelog/#v1.5.1-(June-30,-2023)","page":"Release notes","title":"v1.5.1 (June 30, 2023)","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"This release contains a number of minor code changes, but it has a large impact on the content that is printed to screen. In particular, we now log periodically, instead of each iteration, and a \"good\" stopping rule is used as the default if none are specified. Try using SDDP.train(model) to see the difference.","category":"page"},{"location":"changelog/#Other-13","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed various typos in the documentation (#617)\nFixed printing test after changes in JuMP (#618)\nSet SimulationStoppingRule as the default stopping rule (#619)\nChanged the default logging frequency. Pass log_every_seconds = 0.0 to train to revert to the old behavior. (#620)\nAdded example usage with Distributions.jl (@slwu89) (#622)\nRemoved the numerical issue @warn (#627)\nImproved the quality of docstrings (#630)","category":"page"},{"location":"changelog/#v1.5.0-(May-14,-2023)","page":"Release notes","title":"v1.5.0 (May 14, 2023)","text":"","category":"section"},{"location":"changelog/#Added-5","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added the ability to use a different model for the forward pass. This is a novel feature that lets you train better policies when the model is non-convex or does not have a well-defined dual. See the Alternative forward models tutorial in which we train convex and non-convex formulations of the optimal power flow problem. (#611)","category":"page"},{"location":"changelog/#Other-14","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Updated missing changelog entries (#608)\nRemoved global variables (#610)\nConverted the Options struct to keyword arguments. This struct was a private implementation detail, but the change is breaking if you developed an extension to SDDP that touched these internals. 
(#612)\nFixed some typos (#613)","category":"page"},{"location":"changelog/#v1.4.0-(May-8,-2023)","page":"Release notes","title":"v1.4.0 (May 8, 2023)","text":"","category":"section"},{"location":"changelog/#Added-6","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added SDDP.SimulationStoppingRule (#598)\nAdded sampling_scheme argument to SDDP.write_to_file (#607)","category":"page"},{"location":"changelog/#Fixed-10","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed parsing of some MSPFormat files (#602) (#604)\nFixed printing in header (#605)","category":"page"},{"location":"changelog/#v1.3.0-(May-3,-2023)","page":"Release notes","title":"v1.3.0 (May 3, 2023)","text":"","category":"section"},{"location":"changelog/#Added-7","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added experimental support for SDDP.MSPFormat.read_from_file (#593)","category":"page"},{"location":"changelog/#Other-15","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Updated to StochOptFormat v0.3 (#600)","category":"page"},{"location":"changelog/#v1.2.1-(May-1,-2023)","page":"Release notes","title":"v1.2.1 (May 1, 2023)","text":"","category":"section"},{"location":"changelog/#Fixed-11","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed log_every_seconds (#597)","category":"page"},{"location":"changelog/#v1.2.0-(May-1,-2023)","page":"Release notes","title":"v1.2.0 (May 1, 2023)","text":"","category":"section"},{"location":"changelog/#Added-8","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added SDDP.SimulatorSamplingScheme (#594)\nAdded log_every_seconds argument to SDDP.train (#595)","category":"page"},{"location":"changelog/#Other-16","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Tweaked how the log is printed (#588)\nUpdated to StochOptFormat v0.2 (#592)","category":"page"},{"location":"changelog/#v1.1.4-(April-10,-2023)","page":"Release notes","title":"v1.1.4 (April 10, 2023)","text":"","category":"section"},{"location":"changelog/#Fixed-12","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Logs are now flushed every iteration (#584)","category":"page"},{"location":"changelog/#Other-17","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added docstrings to various functions (#581)\nMinor documentation updates (#580)\nClarified integrality documentation (#582)\nUpdated the README (#585)\nNumber of numerical issues is now printed to the log (#586)","category":"page"},{"location":"changelog/#v1.1.3-(April-2,-2023)","page":"Release notes","title":"v1.1.3 (April 2, 2023)","text":"","category":"section"},{"location":"changelog/#Other-18","page":"Release 
notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed typo in Example: deterministic to stochastic tutorial (#578)\nFixed typo in documentation of SDDP.simulate (#577)","category":"page"},{"location":"changelog/#v1.1.2-(March-18,-2023)","page":"Release notes","title":"v1.1.2 (March 18, 2023)","text":"","category":"section"},{"location":"changelog/#Other-19","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added Example: deterministic to stochastic tutorial (#572)","category":"page"},{"location":"changelog/#v1.1.1-(March-16,-2023)","page":"Release notes","title":"v1.1.1 (March 16, 2023)","text":"","category":"section"},{"location":"changelog/#Other-20","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed email in Project.toml\nAdded notebook to documentation tutorials (#571)","category":"page"},{"location":"changelog/#v1.1.0-(January-12,-2023)","page":"Release notes","title":"v1.1.0 (January 12, 2023)","text":"","category":"section"},{"location":"changelog/#Added-9","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added the node_name_parser argument to SDDP.write_cuts_to_file and added the option to skip nodes in SDDP.read_cuts_from_file (#565)","category":"page"},{"location":"changelog/#v1.0.0-(January-3,-2023)","page":"Release notes","title":"v1.0.0 (January 3, 2023)","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Although we're bumping MAJOR version, this is a non-breaking release. Going forward:","category":"page"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"New features will bump the MINOR version\nBug fixes, maintenance, and documentation updates will bump the PATCH version\nWe will support only the Long Term Support (currently v1.6.7) and the latest patch (currently v1.8.4) releases of Julia. Updates to the LTS version will bump the MINOR version\nUpdates to the compat bounds of package dependencies will bump the PATCH version.","category":"page"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"We do not intend any breaking changes to the public API, which would require a new MAJOR release. The public API is everything defined in the documentation. 
Anything not in the documentation is considered private and may change in any PATCH release.","category":"page"},{"location":"changelog/#Added-10","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added num_nodes argument to SDDP.UnicyclicGraph (#562)\nAdded support for passing an optimizer to SDDP.Asynchronous (#545)","category":"page"},{"location":"changelog/#Other-21","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Updated Plotting tools to use live plots (#563)\nAdded vale as a linter (#565)\nImproved documentation for initializing a parallel scheme (#566)","category":"page"},{"location":"changelog/#v0.4.9-(January-3,-2023)","page":"Release notes","title":"v0.4.9 (January 3, 2023)","text":"","category":"section"},{"location":"changelog/#Added-11","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added SDDP.UnicyclicGraph (#556)","category":"page"},{"location":"changelog/#Other-22","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added tutorial on Markov Decision Processes (#556)\nAdded two-stage newsvendor tutorial (#557)\nRefactored the layout of the documentation (#554) (#555)\nUpdated copyright to 2023 (#558)\nFixed errors in the documentation (#561)","category":"page"},{"location":"changelog/#v0.4.8-(December-19,-2022)","page":"Release notes","title":"v0.4.8 (December 19, 2022)","text":"","category":"section"},{"location":"changelog/#Added-12","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added terminate_on_cycle option to SDDP.Historical (#549)\nAdded include_last_node option to SDDP.DefaultForwardPass (#547)","category":"page"},{"location":"changelog/#Fixed-13","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Reverted then fixed (#531) because it failed to account for problems with integer variables (#546) (#551)","category":"page"},{"location":"changelog/#v0.4.7-(December-17,-2022)","page":"Release notes","title":"v0.4.7 (December 17, 2022)","text":"","category":"section"},{"location":"changelog/#Added-13","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added initial_node support to InSampleMonteCarlo and OutOfSampleMonteCarlo (#535)","category":"page"},{"location":"changelog/#Fixed-14","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Rethrow InterruptException when solver is interrupted (#534)\nFixed numerical recovery when we need dual solutions (#531) (Thanks @bfpc)\nFixed re-using the dashboard = true option between solves (#538)\nFixed bug when no @stageobjective is set (now defaults to 0.0) (#539)\nFixed errors thrown when invalid inputs are provided to add_objective_state (#540)","category":"page"},{"location":"changelog/#Other-23","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release 
notes","text":"Drop support for Julia versions prior to 1.6 (#533)\nUpdated versions of dependencies (#522) (#533)\nSwitched to HiGHS in the documentation and tests (#533)\nAdded license headers (#519)\nFixed link in air conditioning example (#521) (Thanks @conema)\nClarified variable naming in deterministic equivalent (#525) (Thanks @lucasprocessi)\nAdded this change log (#536)\nCuts are now written to model.cuts.json when numerical instability is discovered. This can aid debugging because it allows to you reload the cuts as of the iteration that caused the numerical issue (#537)","category":"page"},{"location":"changelog/#v0.4.6-(March-25,-2022)","page":"Release notes","title":"v0.4.6 (March 25, 2022)","text":"","category":"section"},{"location":"changelog/#Other-24","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Updated to JuMP v1.0 (#517)","category":"page"},{"location":"changelog/#v0.4.5-(March-9,-2022)","page":"Release notes","title":"v0.4.5 (March 9, 2022)","text":"","category":"section"},{"location":"changelog/#Fixed-15","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed issue with set_silent in a subproblem (#510)","category":"page"},{"location":"changelog/#Other-25","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed many typos (#500) (#501) (#506) (#511) (Thanks @bfpc)\nUpdate to JuMP v0.23 (#514)\nAdded auto-regressive tutorial (#507)","category":"page"},{"location":"changelog/#v0.4.4-(December-11,-2021)","page":"Release notes","title":"v0.4.4 (December 11, 2021)","text":"","category":"section"},{"location":"changelog/#Added-14","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added BanditDuality (#471)\nAdded benchmark scripts (#475) (#476) (#490)\nwrite_cuts_to_file now saves visited states (#468)","category":"page"},{"location":"changelog/#Fixed-16","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed BoundStalling in a deterministic policy (#470) (#474)\nFixed magnitude warning with zero coefficients (#483)","category":"page"},{"location":"changelog/#Other-26","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Improvements to LagrangianDuality (#481) (#482) (#487)\nImprovements to StrengthenedConicDuality (#486)\nSwitch to functional form for the tests (#478)\nFixed typos (#472) (Thanks @vfdev-5)\nUpdate to JuMP v0.22 (#498)","category":"page"},{"location":"changelog/#v0.4.3-(August-31,-2021)","page":"Release notes","title":"v0.4.3 (August 31, 2021)","text":"","category":"section"},{"location":"changelog/#Added-15","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added biobjective solver (#462)\nAdded forward_pass_callback (#466)","category":"page"},{"location":"changelog/#Other-27","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Update tutorials and documentation 
(#459) (#465)\nOrganize how paper materials are stored (#464)","category":"page"},{"location":"changelog/#v0.4.2-(August-24,-2021)","page":"Release notes","title":"v0.4.2 (August 24, 2021)","text":"","category":"section"},{"location":"changelog/#Fixed-17","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed a bug in Lagrangian duality (#457)","category":"page"},{"location":"changelog/#v0.4.1-(August-23,-2021)","page":"Release notes","title":"v0.4.1 (August 23, 2021)","text":"","category":"section"},{"location":"changelog/#Other-28","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Minor changes to our implementation of LagrangianDuality (#454) (#455)","category":"page"},{"location":"changelog/#v0.4.0-(August-17,-2021)","page":"Release notes","title":"v0.4.0 (August 17, 2021)","text":"","category":"section"},{"location":"changelog/#Breaking","page":"Release notes","title":"Breaking","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"A large refactoring for how we handle stochastic integer programs. This added support for things like SDDP.ContinuousConicDuality and SDDP.LagrangianDuality. It was breaking because we removed the integrality_handler argument to PolicyGraph. (#449) (#453)","category":"page"},{"location":"changelog/#Other-29","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#447) (#448) (#450)","category":"page"},{"location":"changelog/#v0.3.17-(July-6,-2021)","page":"Release notes","title":"v0.3.17 (July 6, 2021)","text":"","category":"section"},{"location":"changelog/#Added-16","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added SDDP.PSRSamplingScheme (#426)","category":"page"},{"location":"changelog/#Other-30","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Display more model attributes (#438)\nDocumentation improvements (#433) (#437) (#439)","category":"page"},{"location":"changelog/#v0.3.16-(June-17,-2021)","page":"Release notes","title":"v0.3.16 (June 17, 2021)","text":"","category":"section"},{"location":"changelog/#Added-17","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added SDDP.RiskAdjustedForwardPass (#413)\nAllow SDDP.Historical to sample sequentially (#420)","category":"page"},{"location":"changelog/#Other-31","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Update risk measure docstrings (#418)","category":"page"},{"location":"changelog/#v0.3.15-(June-1,-2021)","page":"Release notes","title":"v0.3.15 (June 1, 2021)","text":"","category":"section"},{"location":"changelog/#Added-18","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added SDDP.StoppingChain","category":"page"},{"location":"changelog/#Fixed-18","page":"Release 
notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed scoping bug in SDDP.@stageobjective (#407)\nFixed a bug when the initial point is infeasible (#411)\nSet subproblems to silent by default (#409)","category":"page"},{"location":"changelog/#Other-32","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Add JuliaFormatter (#412)\nDocumentation improvements (#406) (#408)","category":"page"},{"location":"changelog/#v0.3.14-(March-30,-2021)","page":"Release notes","title":"v0.3.14 (March 30, 2021)","text":"","category":"section"},{"location":"changelog/#Fixed-19","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed O(N^2) behavior in get_same_children (#393)","category":"page"},{"location":"changelog/#v0.3.13-(March-27,-2021)","page":"Release notes","title":"v0.3.13 (March 27, 2021)","text":"","category":"section"},{"location":"changelog/#Fixed-20","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed bug in print.jl\nFixed compat of Reexport (#388)","category":"page"},{"location":"changelog/#v0.3.12-(March-22,-2021)","page":"Release notes","title":"v0.3.12 (March 22, 2021)","text":"","category":"section"},{"location":"changelog/#Added-19","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added problem statistics to header (#385) (#386)","category":"page"},{"location":"changelog/#Fixed-21","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed subtypes in visualization (#384)","category":"page"},{"location":"changelog/#v0.3.11-(March-22,-2021)","page":"Release notes","title":"v0.3.11 (March 22, 2021)","text":"","category":"section"},{"location":"changelog/#Fixed-22","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed constructor in direct mode (#383)","category":"page"},{"location":"changelog/#Other-33","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fix documentation (#379)","category":"page"},{"location":"changelog/#v0.3.10-(February-23,-2021)","page":"Release notes","title":"v0.3.10 (February 23, 2021)","text":"","category":"section"},{"location":"changelog/#Fixed-23","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed seriescolor in publication plot (#376)","category":"page"},{"location":"changelog/#v0.3.9-(February-20,-2021)","page":"Release notes","title":"v0.3.9 (February 20, 2021)","text":"","category":"section"},{"location":"changelog/#Added-20","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Add option to simulate with different incoming state (#372)\nAdded warning for cuts with high dynamic range (#373)","category":"page"},{"location":"changelog/#Fixed-24","page":"Release 
notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed seriesalpha in publication plot (#375)","category":"page"},{"location":"changelog/#v0.3.8-(January-19,-2021)","page":"Release notes","title":"v0.3.8 (January 19, 2021)","text":"","category":"section"},{"location":"changelog/#Other-34","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#367) (#369) (#370)","category":"page"},{"location":"changelog/#v0.3.7-(January-8,-2021)","page":"Release notes","title":"v0.3.7 (January 8, 2021)","text":"","category":"section"},{"location":"changelog/#Other-35","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#362) (#363) (#365) (#366)\nBump copyright (#364)","category":"page"},{"location":"changelog/#v0.3.6-(December-17,-2020)","page":"Release notes","title":"v0.3.6 (December 17, 2020)","text":"","category":"section"},{"location":"changelog/#Other-36","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fix typos (#358)\nCollapse navigation bar in docs (#359)\nUpdate TagBot.yml (#361)","category":"page"},{"location":"changelog/#v0.3.5-(November-18,-2020)","page":"Release notes","title":"v0.3.5 (November 18, 2020)","text":"","category":"section"},{"location":"changelog/#Other-37","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Update citations (#348)\nSwitch to GitHub actions (#355)","category":"page"},{"location":"changelog/#v0.3.4-(August-25,-2020)","page":"Release notes","title":"v0.3.4 (August 25, 2020)","text":"","category":"section"},{"location":"changelog/#Added-21","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added non-uniform distributionally robust risk measure (#328)\nAdded numerical recovery functions (#330)\nAdded experimental StochOptFormat (#332) (#336) (#337) (#341) (#343) (#344)\nAdded entropic risk measure (#347)","category":"page"},{"location":"changelog/#Other-38","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#327) (#333) (#339) (#340)","category":"page"},{"location":"changelog/#v0.3.3-(June-19,-2020)","page":"Release notes","title":"v0.3.3 (June 19, 2020)","text":"","category":"section"},{"location":"changelog/#Added-22","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added asynchronous support for price and belief states (#325)\nAdded ForwardPass plug-in system (#320)","category":"page"},{"location":"changelog/#Fixed-25","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fix check for probabilities in Markovian graph (#322)","category":"page"},{"location":"changelog/#v0.3.2-(April-6,-2020)","page":"Release notes","title":"v0.3.2 (April 6, 
2020)","text":"","category":"section"},{"location":"changelog/#Added-23","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added log_frequency argument to SDDP.train (#307)","category":"page"},{"location":"changelog/#Other-39","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Improve error message in deterministic equivalent (#312)\nUpdate to RecipesBase 1.0 (#313)","category":"page"},{"location":"changelog/#v0.3.1-(February-26,-2020)","page":"Release notes","title":"v0.3.1 (February 26, 2020)","text":"","category":"section"},{"location":"changelog/#Fixed-26","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed filename in integrality_handlers.jl (#304)","category":"page"},{"location":"changelog/#v0.3.0-(February-20,-2020)","page":"Release notes","title":"v0.3.0 (February 20, 2020)","text":"","category":"section"},{"location":"changelog/#Breaking-2","page":"Release notes","title":"Breaking","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Breaking changes to update to JuMP v0.21 (#300).","category":"page"},{"location":"changelog/#v0.2.4-(February-7,-2020)","page":"Release notes","title":"v0.2.4 (February 7, 2020)","text":"","category":"section"},{"location":"changelog/#Added-24","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added a counter for the number of total subproblem solves (#301)","category":"page"},{"location":"changelog/#Other-40","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Update formatter (#298)\nAdded tests (#299)","category":"page"},{"location":"changelog/#v0.2.3-(January-24,-2020)","page":"Release notes","title":"v0.2.3 (January 24, 2020)","text":"","category":"section"},{"location":"changelog/#Added-25","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added support for convex risk measures (#294)","category":"page"},{"location":"changelog/#Fixed-27","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed bug when subproblem is infeasible (#296)\nFixed bug in deterministic equivalent (#297)","category":"page"},{"location":"changelog/#Other-41","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added example from IJOC paper (#293)","category":"page"},{"location":"changelog/#v0.2.2-(January-10,-2020)","page":"Release notes","title":"v0.2.2 (January 10, 2020)","text":"","category":"section"},{"location":"changelog/#Fixed-28","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed flakey time limit in tests (#291)","category":"page"},{"location":"changelog/#Other-42","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release 
notes","text":"Removed MathOptFormat.jl (#289)\nUpdate copyright (#290)","category":"page"},{"location":"changelog/#v0.2.1-(December-19,-2019)","page":"Release notes","title":"v0.2.1 (December 19, 2019)","text":"","category":"section"},{"location":"changelog/#Added-26","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added support for approximating a Markov lattice (#282) (#285)\nAdd tools for visualizing the value function (#272) (#286)\nWrite .mof.json files on error (#284)","category":"page"},{"location":"changelog/#Other-43","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Improve documentation (#281) (#283)\nUpdate tests for Julia 1.3 (#287)","category":"page"},{"location":"changelog/#v0.2.0-(December-16,-2019)","page":"Release notes","title":"v0.2.0 (December 16, 2019)","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"This version added the asynchronous parallel implementation with a few minor breaking changes in how we iterated internally. It didn't break basic user-facing models, only implementations that implemented some of the extension features. It probably could have been a v1.1 release.","category":"page"},{"location":"changelog/#Added-27","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added asynchronous parallel implementation (#277)\nAdded roll-out algorithm for cyclic graphs (#279)","category":"page"},{"location":"changelog/#Other-44","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Improved error messages in PolicyGraph (#271)\nAdded JuliaFormatter (#273) (#276)\nFixed compat bounds (#274) (#278)\nAdded documentation for simulating non-standard graphs (#280)","category":"page"},{"location":"changelog/#v0.1.0-(October-17,-2019)","page":"Release notes","title":"v0.1.0 (October 17, 2019)","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"A complete rewrite of SDDP.jl based on the policy graph framework. This was essentially a new package. It has minimal code in common with the previous implementation.","category":"page"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Development started on September 28, 2018 in Kokako.jl, and the code was merged into SDDP.jl on March 14, 2019.","category":"page"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"The pull request SDDP.jl#180 lists the 29 issues that the rewrite closed.","category":"page"},{"location":"changelog/#v0.0.1-(April-18,-2018)","page":"Release notes","title":"v0.0.1 (April 18, 2018)","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Initial release. Development had been underway since January 22, 2016 in the StochDualDynamicProgram.jl repository. The last development commit there was April 5, 2017. 
Work then continued in this repository for a year before the first tagged release.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"EditURL = \"example_newsvendor.jl\"","category":"page"},{"location":"tutorial/example_newsvendor/#Example:-two-stage-newsvendor","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"The purpose of this tutorial is to demonstrate how to model and solve a two-stage stochastic program.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"It is based on the Two stage stochastic programs tutorial in JuMP.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"This tutorial uses the following packages","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"using JuMP\nusing SDDP\nimport Distributions\nimport ForwardDiff\nimport HiGHS\nimport Plots\nimport StatsPlots\nimport Statistics","category":"page"},{"location":"tutorial/example_newsvendor/#Background","page":"Example: two-stage newsvendor","title":"Background","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"The data for this problem is:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"D = Distributions.TriangularDist(150.0, 250.0, 200.0)\nN = 100\nd = sort!(rand(D, N));\nΩ = 1:N\nP = fill(1 / N, N);\nStatsPlots.histogram(d; bins = 20, label = \"\", xlabel = \"Demand\")","category":"page"},{"location":"tutorial/example_newsvendor/#Kelley's-cutting-plane-algorithm","page":"Example: two-stage newsvendor","title":"Kelley's cutting plane algorithm","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Kelley's cutting plane algorithm is an iterative method for maximizing concave functions. 
Given a concave function f(x), Kelley's constructs an outer-approximation of the function at the maximum by a set of first-order Taylor series approximations (called cuts) constructed at a set of points $k = 1,\ldots,K$:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"\begin{aligned}\nf^K = \max\limits_{\theta \in \mathbb{R}, x \in \mathbb{R}^N} \quad & \theta \\\n& \theta \le f(x_k) + \nabla f(x_k)^\top (x - x_k), \quad k=1,\ldots,K \\\n& \theta \le M\n\end{aligned}","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"where M is a sufficiently large number that is an upper bound for f over the domain of x.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Kelley's cutting plane algorithm is a structured way of choosing points x_k to visit, so that as more cuts are added:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"\lim_{K \rightarrow \infty} f^K = \max\limits_{x \in \mathbb{R}^N} f(x)","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"However, before we introduce the algorithm, we need to introduce some bounds.","category":"page"},{"location":"tutorial/example_newsvendor/#Bounds","page":"Example: two-stage newsvendor","title":"Bounds","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"By concavity, $f(x) \le f^K$ for all x. Thus, if x^* is a maximizer of f, then at any point in time we can construct an upper bound for f(x^*) by solving f^K.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Moreover, we can use the primal solutions x_k^* returned by solving f^k to evaluate f(x_k^*) to generate a lower bound.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Therefore, $\max\limits_{k=1,\ldots,K} f(x_k^*) \le f(x^*) \le f^K$.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"When the lower bound is sufficiently close to the upper bound, we can terminate the algorithm and declare that we have found a solution that is close to optimal.","category":"page"},{"location":"tutorial/example_newsvendor/#Implementation","page":"Example: two-stage newsvendor","title":"Implementation","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Here is pseudo-code for the Kelley algorithm:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Take as input a concave function f(x) and an iteration limit $K_{max}$. Set $K = 1$, and initialize $f^{K-1}$. 
Set $lb = -\infty$ and $ub = \infty$.\nSolve $f^{K-1}$ to obtain a candidate solution $x_K$.\nUpdate $ub = f^{K-1}$ and $lb = \max\{lb, f(x_K)\}$.\nAdd a cut $\theta \le f(x_K) + \nabla f\left(x_K\right)^\top (x - x_K)$ to form $f^K$.\nIncrement $K$.\nIf $K > K_{max}$ or $ub - lb < \epsilon$, STOP; otherwise, go to step 2.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"And here's a complete implementation:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"function kelleys_cutting_plane(\n # The function to be maximized.\n f::Function,\n # The gradient of `f`. By default, we use automatic differentiation to\n # compute the gradient of f so the user doesn't have to!\n ∇f::Function = x -> ForwardDiff.gradient(f, x);\n # The number of arguments to `f`.\n input_dimension::Int,\n # An upper bound for the function `f` over its domain.\n upper_bound::Float64,\n # The number of iterations to run Kelley's algorithm for before stopping.\n iteration_limit::Int,\n # The absolute tolerance ϵ to use for convergence.\n tolerance::Float64 = 1e-6,\n)\n # Step (1):\n K = 1\n model = JuMP.Model(HiGHS.Optimizer)\n JuMP.set_silent(model)\n JuMP.@variable(model, θ <= upper_bound)\n JuMP.@variable(model, x[1:input_dimension])\n JuMP.@objective(model, Max, θ)\n x_k = fill(NaN, input_dimension)\n lower_bound, upper_bound = -Inf, Inf\n while true\n # Step (2):\n JuMP.optimize!(model)\n x_k .= JuMP.value.(x)\n # Step (3):\n upper_bound = JuMP.objective_value(model)\n lower_bound = max(lower_bound, f(x_k))\n println(\"K = $K : $(lower_bound) <= f(x*) <= $(upper_bound)\")\n # Step (4):\n JuMP.@constraint(model, θ <= f(x_k) + ∇f(x_k)' * (x .- x_k))\n # Step (5):\n K = K + 1\n # Step (6):\n if K > iteration_limit\n println(\"-- Termination status: iteration limit --\")\n break\n elseif abs(upper_bound - lower_bound) < tolerance\n println(\"-- Termination status: converged --\")\n break\n end\n end\n println(\"Found solution: x_K = \", x_k)\n return\nend","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Let's run our algorithm to see what happens:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"kelleys_cutting_plane(;\n input_dimension = 2,\n upper_bound = 10.0,\n iteration_limit = 20,\n) do x\n return -(x[1] - 1)^2 + -(x[2] + 2)^2 + 1.0\nend","category":"page"},{"location":"tutorial/example_newsvendor/#L-Shaped-theory","page":"Example: two-stage newsvendor","title":"L-Shaped theory","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"The L-Shaped method is a way of solving two-stage stochastic programs by Benders' decomposition. 
It takes the problem:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"\begin{aligned}\nV = \max\limits_{x,y_\omega} \quad & -2x + \mathbb{E}_\omega[5y_\omega - 0.1(x - y_\omega)] \\\n& y_\omega \le x \quad \forall \omega \in \Omega \\\n& 0 \le y_\omega \le d_\omega \quad \forall \omega \in \Omega \\\n& x \ge 0\n\end{aligned}","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"and decomposes it into a second-stage problem:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"\begin{aligned}\nV_2(\bar{x}, d_\omega) = \max\limits_{x,x^\prime,y_\omega} \quad & 5y_\omega - 0.1x^\prime \\\n& y_\omega \le x \\\n& x^\prime = x - y_\omega \\\n& 0 \le y_\omega \le d_\omega \\\n& x = \bar{x} \quad [\lambda]\n\end{aligned}","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"and a first-stage problem:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"\begin{aligned}\nV = \max\limits_{x,\theta} \quad & -2x + \theta \\\n& \theta \le \mathbb{E}_\omega[V_2(x, \omega)] \\\n& x \ge 0\n\end{aligned}","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Then, because V_2 is concave with respect to $\bar{x}$ for fixed $\omega$, we can use a set of feasible points x^k to construct an outer approximation:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"\begin{aligned}\nV^K = \max\limits_{x,\theta} \quad & -2x + \theta \\\n& \theta \le \mathbb{E}_\omega[V_2(x^k, \omega) + \nabla V_2(x^k, \omega)^\top(x - x^k)] \quad k = 1,\ldots,K \\\n& x \ge 0 \\\n& \theta \le M\n\end{aligned}","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"where M is an upper bound on possible values of V_2 so that the problem has a bounded solution.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"It is also useful to see that because $\bar{x}$ appears only on the right-hand side of a linear program, $\nabla V_2(x^k, \omega) = \lambda^k$.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Ignoring how we choose x^k for now, we can construct a lower and upper bound on the optimal solution:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"-2x^K + \mathbb{E}_\omega[V_2(x^K, \omega)] = \underbar{V} \le V \le \overline{V} = V^K","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Thus, we need some way of cleverly choosing a sequence of x^k so that the lower bound converges to the upper bound.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Start with $K=1$\nSolve $V^{K-1}$ to get $x^K$\nSet $\overline{V} = V^{K-1}$\nSolve $V_2(x^K, \omega)$ for all $\omega$ and store the optimal objective value and dual solution $\lambda^K$\nSet $\underbar{V} = -2x^K + \mathbb{E}_\omega[V_2(x^K, \omega)]$\nIf $\underbar{V} \approx \overline{V}$, STOP\nAdd new constraint $\theta \le \mathbb{E}_\omega[V_2(x^K, \omega) + \lambda^K (x - x^K)]$\nIncrement $K$, GOTO 2","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"The next section implements this algorithm in Julia.","category":"page"},{"location":"tutorial/example_newsvendor/#L-Shaped-implementation","page":"Example: two-stage newsvendor","title":"L-Shaped implementation","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Here's a function to compute the second-stage problem:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"function solve_second_stage(x̅, d_ω)\n model = Model(HiGHS.Optimizer)\n set_silent(model)\n @variable(model, x_in)\n @variable(model, x_out >= 0)\n fix(x_in, x̅)\n @variable(model, 0 <= u_sell <= d_ω)\n @constraint(model, x_out == x_in - u_sell)\n @constraint(model, u_sell <= x_in)\n @objective(model, Max, 5 * u_sell - 0.1 * x_out)\n optimize!(model)\n return (\n V = objective_value(model),\n λ = reduced_cost(x_in),\n x = value(x_out),\n u = value(u_sell),\n )\nend\n\nsolve_second_stage(200, 170)","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Here's the first-stage subproblem:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"model = Model(HiGHS.Optimizer)\nset_silent(model)\n@variable(model, x_in == 0)\n@variable(model, x_out >= 0)\n@variable(model, u_make >= 0)\n@constraint(model, x_out == x_in + u_make)\nM = 5 * maximum(d)\n@variable(model, θ <= M)\n@objective(model, Max, -2 * u_make + θ)","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Importantly, to ensure we have a bounded solution, we need to add an upper bound to the variable θ.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"kIterationLimit = 100\nfor k in 1:kIterationLimit\n println(\"Solving iteration k = $k\")\n # Step 2\n optimize!(model)\n xᵏ = value(x_out)\n println(\" xᵏ = $xᵏ\")\n # Step 3\n ub = objective_value(model)\n println(\" V̅ = $ub\")\n # Step 4\n ret = [solve_second_stage(xᵏ, d[ω]) for ω in Ω]\n # Step 5\n lb = value(-2 * u_make) + sum(p * r.V for (p, r) in zip(P, ret))\n println(\" V̲ = $lb\")\n # Step 6\n if ub - lb < 1e-6\n println(\"Terminating with near-optimal solution\")\n break\n end\n # Step 7\n c = @constraint(\n model,\n θ <= sum(p * (r.V + r.λ * (x_out - xᵏ)) for (p, r) in zip(P, ret)),\n )\n println(\" Added cut: $c\")\nend","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"To get the first-stage solution, we do:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"optimize!(model)\nxᵏ = value(x_out)","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage 
newsvendor","text":"To compute a second-stage solution, we do:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"solve_second_stage(xᵏ, 170.0)","category":"page"},{"location":"tutorial/example_newsvendor/#Policy-Graph","page":"Example: two-stage newsvendor","title":"Policy Graph","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Now let's see how we can formulate and train a policy for the two-stage newsvendor problem using SDDP.jl. Under the hood, SDDP.jl implements the exact algorithm that we just wrote by hand.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"model = SDDP.LinearPolicyGraph(;\n stages = 2,\n sense = :Max,\n upper_bound = 5 * maximum(d), # The `M` in θ <= M\n optimizer = HiGHS.Optimizer,\n) do subproblem::JuMP.Model, stage::Int\n @variable(subproblem, x >= 0, SDDP.State, initial_value = 0)\n if stage == 1\n @variable(subproblem, u_make >= 0)\n @constraint(subproblem, x.out == x.in + u_make)\n @stageobjective(subproblem, -2 * u_make)\n else\n @variable(subproblem, u_sell >= 0)\n @constraint(subproblem, u_sell <= x.in)\n @constraint(subproblem, x.out == x.in - u_sell)\n SDDP.parameterize(subproblem, d, P) do ω\n set_upper_bound(u_sell, ω)\n return\n end\n @stageobjective(subproblem, 5 * u_sell - 0.1 * x.out)\n end\n return\nend\n\nSDDP.train(model; log_every_iteration = true)","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"One way to query the optimal policy is with SDDP.DecisionRule:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"first_stage_rule = SDDP.DecisionRule(model; node = 1)","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"solution_1 = SDDP.evaluate(first_stage_rule; incoming_state = Dict(:x => 0.0))","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Here's the second stage:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"second_stage_rule = SDDP.DecisionRule(model; node = 2)\nsolution = SDDP.evaluate(\n second_stage_rule;\n incoming_state = Dict(:x => solution_1.outgoing_state[:x]),\n noise = 170.0, # A value of d[ω], can be out-of-sample.\n controls_to_record = [:u_sell],\n)","category":"page"},{"location":"tutorial/example_newsvendor/#Simulation","page":"Example: two-stage newsvendor","title":"Simulation","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Querying the decision rules is tedious. 
It's often more useful to simulate the policy:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"simulations = SDDP.simulate(\n model,\n 10, #= number of replications =#\n [:x, :u_sell, :u_make]; #= variables to record =#\n skip_undefined_variables = true,\n);\nnothing #hide","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"simulations is a vector with 10 elements","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"length(simulations)","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"and each element is a vector with two elements (one for each stage)","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"length(simulations[1])","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"The first stage contains:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"simulations[1][1]","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"The second stage contains:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"simulations[1][2]","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"We can compute aggregated statistics across the simulations:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"objectives = map(simulations) do simulation\n return sum(data[:stage_objective] for data in simulation)\nend\nμ, t = SDDP.confidence_interval(objectives)\nprintln(\"Simulation ci : $μ ± $t\")","category":"page"},{"location":"tutorial/example_newsvendor/#Risk-aversion-revisited","page":"Example: two-stage newsvendor","title":"Risk aversion revisited","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"SDDP.jl contains a number of risk measures. 
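For instance, the following constructors (all of which appear later in this tutorial; the particular parameter values are illustrative) each build a risk measure:

SDDP.Expectation()
SDDP.WorstCase()
SDDP.CVaR(0.4)
SDDP.Entropic(0.1)

Risk measures can also be combined into convex combinations. 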
One example is:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"0.5 * SDDP.Expectation() + 0.5 * SDDP.WorstCase()","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"You can construct a risk-averse policy by passing a risk measure to the risk_measure keyword argument of SDDP.train.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"We can explore how the optimal decision changes with risk by creating a function:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"function solve_newsvendor(risk_measure::SDDP.AbstractRiskMeasure)\n model = SDDP.LinearPolicyGraph(;\n stages = 2,\n sense = :Max,\n upper_bound = 5 * maximum(d),\n optimizer = HiGHS.Optimizer,\n ) do subproblem, node\n @variable(subproblem, x >= 0, SDDP.State, initial_value = 0)\n if node == 1\n @stageobjective(subproblem, -2 * x.out)\n else\n @variable(subproblem, u_sell >= 0)\n @constraint(subproblem, u_sell <= x.in)\n @constraint(subproblem, x.out == x.in - u_sell)\n SDDP.parameterize(subproblem, d, P) do ω\n set_upper_bound(u_sell, ω)\n return\n end\n @stageobjective(subproblem, 5 * u_sell - 0.1 * x.out)\n end\n return\n end\n SDDP.train(model; risk_measure = risk_measure, print_level = 0)\n first_stage_rule = SDDP.DecisionRule(model; node = 1)\n solution = SDDP.evaluate(first_stage_rule; incoming_state = Dict(:x => 0.0))\n return solution.outgoing_state[:x]\nend","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Now we can see how many units a decision maker would order using CVaR:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"solve_newsvendor(SDDP.CVaR(0.4))","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"as well as a decision-maker who cares only about the worst-case outcome:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"solve_newsvendor(SDDP.WorstCase())","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"In general, the decision-maker will be somewhere between the two extremes. The SDDP.Entropic risk measure is a risk measure that has a single parameter that lets us explore the space of policies between the two extremes. 
When the parameter is small, the measure acts like SDDP.Expectation, and when it is large, it acts like SDDP.WorstCase.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Here is what we get if we solve our problem multiple times for different values of the risk aversion parameter gamma:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Γ = [10^i for i in -4:0.5:1]\nbuy = [solve_newsvendor(SDDP.Entropic(γ)) for γ in Γ]\nPlots.plot(\n Γ,\n buy;\n xaxis = :log,\n xlabel = \"Risk aversion parameter γ\",\n ylabel = \"Number of pies to make\",\n legend = false,\n)","category":"page"},{"location":"tutorial/example_newsvendor/#Things-to-try","page":"Example: two-stage newsvendor","title":"Things to try","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"There are a number of things you can try next:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Experiment with different buy and sales prices\nExperiment with different distributions of demand\nExplore how the optimal policy changes if you use a different risk measure\nWhat happens if you can only buy and sell integer numbers of newspapers? Try this by adding Int to the variable definitions: @variable(subproblem, buy >= 0, Int)\nWhat happens if you use a different upper bound? Try an invalid one like -100, and a very large one like 1e12.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"EditURL = \"theory_intro.jl\"","category":"page"},{"location":"explanation/theory_intro/#Introductory-theory","page":"Introductory theory","title":"Introductory theory","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"note: Note\nThis tutorial is aimed at advanced undergraduates or early-stage graduate students. You don't need prior exposure to stochastic programming! (Indeed, it may be better if you don't, because our approach is non-standard in the literature.)This tutorial is also a living document. If parts are unclear, please open an issue so it can be improved!","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"This tutorial will teach you how the stochastic dual dynamic programming algorithm works by implementing a simplified version of the algorithm.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Our implementation is very much a \"vanilla\" version of SDDP; it doesn't have (m)any fancy computational tricks (e.g., the ones included in SDDP.jl) that you need to code a performant or stable version that will work on realistic instances. 
However, our simplified implementation will work on arbitrary policy graphs, including those with cycles such as infinite horizon problems!","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Packages","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"This tutorial uses the following packages. For clarity, we call import PackageName so that we must prefix PackageName. to all functions and structs provided by that package. Everything not prefixed is either part of base Julia, or we wrote it.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"import ForwardDiff\nimport HiGHS\nimport JuMP\nimport Statistics","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"tip: Tip\nYou can follow along by installing the above packages, and copy-pasting the code we will write into a Julia REPL. Alternatively, you can download the Julia .jl file which created this tutorial from GitHub.","category":"page"},{"location":"explanation/theory_intro/#Preliminaries:-background-theory","page":"Introductory theory","title":"Preliminaries: background theory","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Start this tutorial by reading An introduction to SDDP.jl, which introduces the necessary notation and vocabulary that we need for this tutorial.","category":"page"},{"location":"explanation/theory_intro/#Preliminaries:-Kelley's-cutting-plane-algorithm","page":"Introductory theory","title":"Preliminaries: Kelley's cutting plane algorithm","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Kelley's cutting plane algorithm is an iterative method for minimizing convex functions. 
Given a convex function f(x), Kelley's constructs an under-approximation of the function at the minimum by a set of first-order Taylor series approximations (called cuts) constructed at a set of points $k = 1,\ldots,K$:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"\begin{aligned}\nf^K = \min\limits_{\theta \in \mathbb{R}, x \in \mathbb{R}^N} \quad & \theta \\\n& \theta \ge f(x_k) + \frac{d}{dx}f(x_k)^\top (x - x_k), \quad k=1,\ldots,K \\\n& \theta \ge M\n\end{aligned}","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"where M is a sufficiently large negative number that is a lower bound for f over the domain of x.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Kelley's cutting plane algorithm is a structured way of choosing points x_k to visit, so that as more cuts are added:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"\lim_{K \rightarrow \infty} f^K = \min\limits_{x \in \mathbb{R}^N} f(x)","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"However, before we introduce the algorithm, we need to introduce some bounds.","category":"page"},{"location":"explanation/theory_intro/#Bounds","page":"Introductory theory","title":"Bounds","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"By convexity, $f^K \le f(x)$ for all x. Thus, if x^* is a minimizer of f, then at any point in time we can construct a lower bound for f(x^*) by solving f^K.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Moreover, we can use the primal solutions x_k^* returned by solving f^k to evaluate f(x_k^*) to generate an upper bound.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Therefore, $f^K \le f(x^*) \le \min\limits_{k=1,\ldots,K} f(x_k^*)$.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"When the lower bound is sufficiently close to the upper bound, we can terminate the algorithm and declare that we have found a solution that is close to optimal.","category":"page"},{"location":"explanation/theory_intro/#Implementation","page":"Introductory theory","title":"Implementation","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Here is pseudo-code for the Kelley algorithm:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Take as input a convex function f(x) and an iteration limit $K_{max}$. Set $K = 0$, and initialize $f^K$. 
Set $lb = -\infty$ and $ub = \infty$.\nSolve $f^K$ to obtain a candidate solution $x_{K+1}$.\nUpdate $lb = f^K$ and $ub = \min\{ub, f(x_{K+1})\}$.\nAdd a cut $\theta \ge f(x_{K+1}) + \frac{d}{dx}f\left(x_{K+1}\right)^\top (x - x_{K+1})$ to form $f^{K+1}$.\nIncrement $K$.\nIf $K = K_{max}$ or $ub - lb < \epsilon$, STOP; otherwise, go to step 2.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"And here's a complete implementation:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function kelleys_cutting_plane(\n # The function to be minimized.\n f::Function,\n # The gradient of `f`. By default, we use automatic differentiation to\n # compute the gradient of f so the user doesn't have to!\n dfdx::Function = x -> ForwardDiff.gradient(f, x);\n # The number of arguments to `f`.\n input_dimension::Int,\n # A lower bound for the function `f` over its domain.\n lower_bound::Float64,\n # The number of iterations to run Kelley's algorithm for before stopping.\n iteration_limit::Int,\n # The absolute tolerance ϵ to use for convergence.\n tolerance::Float64 = 1e-6,\n)\n # Step (1):\n K = 0\n model = JuMP.Model(HiGHS.Optimizer)\n JuMP.set_silent(model)\n JuMP.@variable(model, θ >= lower_bound)\n JuMP.@variable(model, x[1:input_dimension])\n JuMP.@objective(model, Min, θ)\n x_k = fill(NaN, input_dimension)\n lower_bound, upper_bound = -Inf, Inf\n while true\n # Step (2):\n JuMP.optimize!(model)\n x_k .= JuMP.value.(x)\n # Step (3):\n lower_bound = JuMP.objective_value(model)\n upper_bound = min(upper_bound, f(x_k))\n println(\"K = $K : $(lower_bound) <= f(x*) <= $(upper_bound)\")\n # Step (4):\n JuMP.@constraint(model, θ >= f(x_k) + dfdx(x_k)' * (x .- x_k))\n # Step (5):\n K = K + 1\n # Step (6):\n if K == iteration_limit\n println(\"-- Termination status: iteration limit --\")\n break\n elseif abs(upper_bound - lower_bound) < tolerance\n println(\"-- Termination status: converged --\")\n break\n end\n end\n println(\"Found solution: x_K = \", x_k)\n return\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Let's run our algorithm to see what happens:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"kelleys_cutting_plane(;\n input_dimension = 2,\n lower_bound = 0.0,\n iteration_limit = 20,\n) do x\n return (x[1] - 1)^2 + (x[2] + 2)^2 + 1.0\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"warning: Warning\nIt's hard to choose a valid lower bound! If you choose one too loose, the algorithm can take a long time to converge. However, if you choose one so tight that $M > f(x^*)$, then you can obtain a suboptimal solution. 
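As a quick experiment (this call is not part of the original tutorial), you can re-run the implementation above with a bound that is too tight. The true minimum of the test function is 1.0, so lower_bound = 2.0 is invalid, and the algorithm typically reports bounds near 2.0 instead of locating the true minimizer:

kelleys_cutting_plane(;
    input_dimension = 2,
    lower_bound = 2.0,  # too tight: the true minimum of this function is 1.0
    iteration_limit = 20,
) do x
    return (x[1] - 1)^2 + (x[2] + 2)^2 + 1.0
end
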
For a deeper discussion of the implications for SDDP.jl, see Choosing an initial bound.","category":"page"},{"location":"explanation/theory_intro/#Preliminaries:-approximating-the-cost-to-go-term","page":"Introductory theory","title":"Preliminaries: approximating the cost-to-go term","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"In the background theory section, we discussed how you could formulate an optimal policy to a multistage stochastic program using the dynamic programming recursion:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"beginaligned\nV_i(x omega) = minlimits_barx x^prime u C_i(barx u omega) + mathbbE_j in i^+ varphi in Omega_jV_j(x^prime varphi)\n x^prime = T_i(barx u omega) \n u in U_i(barx omega) \n barx = x\nendaligned","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"where our decision rule, pi_i(x omega), solves this optimization problem and returns a u^* corresponding to an optimal solution. Moreover, we alluded to the fact that the cost-to-go term (the nasty recursive expectation) makes this problem intractable to solve.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"However, if, excluding the cost-to-go term (i.e., the SP formulation), V_i(x omega) can be formulated as a linear program (this also works for convex programs, but the math is more involved), then we can make some progress by noticing that x only appears as a right-hand side term of the fishing constraint barx = x. Therefore, V_i(x cdot) is convex with respect to x for fixed omega. (If you have not seen this result before, try to prove it.)","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"The fishing constraint barx = x has an associated dual variable. The economic interpretation of this dual variable is that it represents the change in the objective function if the right-hand side x is increased on the scale of one unit. In other words, and with a slight abuse of notation, it is the value fracddx V_i(x omega). 
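As a small numerical illustration of this interpretation (a sketch that is not part of the original text; the cost function and numbers are made up), we can build a one-dimensional subproblem, add the fishing constraint as an explicit row, and query its dual with JuMP:

x_value = 1.0  # the incoming state x (illustrative value)
illustration = JuMP.Model(HiGHS.Optimizer)
JuMP.set_silent(illustration)
JuMP.@variable(illustration, x̄)
JuMP.@variable(illustration, u >= 0)
# A stand-in for C_i(x̄, u, ω): pay 2 per unit of a demand of 3 not covered by x̄.
JuMP.@constraint(illustration, u >= 3 - x̄)
JuMP.@objective(illustration, Min, 2 * u)
# The fishing constraint x̄ = x, kept as an explicit row so we can query its dual.
fishing = JuMP.@constraint(illustration, x̄ == x_value)
JuMP.optimize!(illustration)
JuMP.dual(fishing)  # ≈ -2.0 here: the change in the optimal cost per unit increase in x

Re-solving with x_value = 1.1 lowers the optimal cost from 4.0 to 3.8, which agrees with the dual value of -2. 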
(Because V_i is not differentiable, it is a subgradient instead of a derivative.)","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"If we implement the constraint barx = x by setting the lower- and upper bounds of barx to x, then the reduced cost of the decision variable barx is the subgradient, and we do not need to explicitly add the fishing constraint as a row to the constraint matrix.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"tip: Tip\nThe subproblem can have binary and integer variables, but you'll need to use Lagrangian duality to compute a subgradient!","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Stochastic dual dynamic programming converts this problem into a tractable form by applying Kelley's cutting plane algorithm to the V_j functions in the cost-to-go term:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"beginaligned\nV_i^K(x omega) = minlimits_barx x^prime u C_i(barx u omega) + theta\n x^prime = T_i(barx u omega) \n u in U_i(barx omega) \n barx = x \n theta ge mathbbE_j in i^+ varphi in Omega_jleftV_j^k(x^prime_k varphi) + fracddx^primeV_j^k(x^prime_k varphi)^top (x^prime - x^prime_k)rightquad k=1ldotsK \n theta ge M\nendaligned","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"All we need now is a way of generating these cutting planes in an iterative manner. Before we get to that though, let's start writing some code.","category":"page"},{"location":"explanation/theory_intro/#Implementation:-modeling","page":"Introductory theory","title":"Implementation: modeling","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Let's make a start by defining the problem structure. Like SDDP.jl, we need a few things:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"A description of the structure of the policy graph: how many nodes there are, and the arcs linking the nodes together with their corresponding probabilities.\nA JuMP model for each node in the policy graph.\nA way to identify the incoming and outgoing state variables of each node.\nA description of the random variable, as well as a function that we can call that will modify the JuMP model to reflect the realization of the random variable.\nA decision variable to act as the approximated cost-to-go term.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"warning: Warning\nIn the interests of brevity, there is minimal error checking. 
Think about all the different ways you could break the code!","category":"page"},{"location":"explanation/theory_intro/#Structs","page":"Introductory theory","title":"Structs","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"The first struct we are going to use is a State struct that will wrap an incoming and outgoing state variable:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"struct State\n in::JuMP.VariableRef\n out::JuMP.VariableRef\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Next, we need a struct to wrap all of the uncertainty within a node:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"struct Uncertainty\n parameterize::Function\n Ω::Vector{Any}\n P::Vector{Float64}\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"parameterize is a function which takes a realization of the random variable omegainOmega and updates the subproblem accordingly. The finite discrete random variable is defined by the vectors Ω and P, so that the random variable takes the value Ω[i] with probability P[i]. As such, P should sum to 1. (We don't check this here, but we should; we do in SDDP.jl.)","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Now we have two building blocks, we can declare the structure of each node:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"struct Node\n subproblem::JuMP.Model\n states::Dict{Symbol,State}\n uncertainty::Uncertainty\n cost_to_go::JuMP.VariableRef\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"subproblem is going to be the JuMP model that we build at each node.\nstates is a dictionary that maps a symbolic name of a state variable to a State object wrapping the incoming and outgoing state variables in subproblem.\nuncertainty is an Uncertainty object described above.\ncost_to_go is a JuMP variable that approximates the cost-to-go term.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Finally, we define a simplified policy graph as follows:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"struct PolicyGraph\n nodes::Vector{Node}\n arcs::Vector{Dict{Int,Float64}}\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"There is a vector of nodes, as well as a data structure for the arcs. arcs is a vector of dictionaries, where arcs[i][j] gives the probability of transitioning from node i to node j, if an arc exists.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"To simplify things, we will assume that the root node transitions to node 1 with probability 1, and there are no other incoming arcs to node 1. 
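For example, the three-stage linear graph that we build later on this page is described by arcs = [Dict(2 => 1.0), Dict(3 => 1.0), Dict{Int,Float64}()]: node 1 transitions to node 2 with probability 1, node 2 transitions to node 3 with probability 1, and node 3 is a leaf, so its dictionary of outgoing arcs is empty. 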
Notably, we can still define cyclic graphs though!","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"We also define a nice show method so that we don't accidentally print a large amount of information to the screen when creating a model:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function Base.show(io::IO, model::PolicyGraph)\n println(io, \"A policy graph with $(length(model.nodes)) nodes\")\n println(io, \"Arcs:\")\n for (from, arcs) in enumerate(model.arcs)\n for (to, probability) in arcs\n println(io, \" $(from) => $(to) w.p. $(probability)\")\n end\n end\n return\nend","category":"page"},{"location":"explanation/theory_intro/#Functions","page":"Introductory theory","title":"Functions","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Now we have some basic types, let's implement some functions so that the user can create a model.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"First, we need an example of a function that the user will provide. Like SDDP.jl, this takes an empty subproblem, and a node index, in this case t::Int. You could change this function to change the model, or define a new one later in the code.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"We're going to copy the example from An introduction to SDDP.jl, with some minor adjustments for the fact we don't have many of the bells and whistles of SDDP.jl. You can probably see how some of the SDDP.jl functionality like @stageobjective and SDDP.parameterize help smooth some of the usability issues like needing to construct both the incoming and outgoing state variables, or needing to explicitly declare return states, uncertainty.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function subproblem_builder(subproblem::JuMP.Model, t::Int)\n # Define the state variables. Note how we fix the incoming state to the\n # initial state variable regardless of `t`! This isn't strictly necessary;\n # it only matters that we do it for the first node.\n JuMP.@variable(subproblem, volume_in == 200)\n JuMP.@variable(subproblem, 0 <= volume_out <= 200)\n states = Dict(:volume => State(volume_in, volume_out))\n # Define the control variables.\n JuMP.@variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n inflow\n end)\n # Define the constraints\n JuMP.@constraints(\n subproblem,\n begin\n volume_out == volume_in + inflow - hydro_generation - hydro_spill\n demand_constraint, thermal_generation + hydro_generation == 150.0\n end\n )\n # Define the objective for each stage `t`. Note that we can use `t` as an\n # index for t = 1, 2, 3.\n fuel_cost = [50.0, 100.0, 150.0]\n JuMP.@objective(subproblem, Min, fuel_cost[t] * thermal_generation)\n # Finally, we define the uncertainty object. Because this is a simplified\n # implementation of SDDP, we shall politely ask the user to only modify the\n # constraints, and not the objective function! 
(Not that it changes the\n # algorithm, we just have to add more information to keep track of things.)\n uncertainty = Uncertainty([0.0, 50.0, 100.0], [1 / 3, 1 / 3, 1 / 3]) do ω\n return JuMP.fix(inflow, ω)\n end\n return states, uncertainty\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"The next function we need to define is the analog of SDDP.PolicyGraph. It should be pretty readable.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function PolicyGraph(\n subproblem_builder::Function;\n graph::Vector{Dict{Int,Float64}},\n lower_bound::Float64,\n optimizer,\n)\n nodes = Node[]\n for t in 1:length(graph)\n # Create a model.\n model = JuMP.Model(optimizer)\n JuMP.set_silent(model)\n # Use the provided function to build out each subproblem. The user's\n # function returns a dictionary mapping `Symbol`s to `State` objects,\n # and an `Uncertainty` object.\n states, uncertainty = subproblem_builder(model, t)\n # Now add the cost-to-go terms:\n JuMP.@variable(model, cost_to_go >= lower_bound)\n obj = JuMP.objective_function(model)\n JuMP.@objective(model, Min, obj + cost_to_go)\n # If there are no outgoing arcs, the cost-to-go is 0.0.\n if length(graph[t]) == 0\n JuMP.fix(cost_to_go, 0.0; force = true)\n end\n push!(nodes, Node(model, states, uncertainty, cost_to_go))\n end\n return PolicyGraph(nodes, graph)\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Then, we can create a model using the subproblem_builder function we defined earlier:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"model = PolicyGraph(\n subproblem_builder;\n graph = [Dict(2 => 1.0), Dict(3 => 1.0), Dict{Int,Float64}()],\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n)","category":"page"},{"location":"explanation/theory_intro/#Implementation:-helpful-samplers","page":"Introductory theory","title":"Implementation: helpful samplers","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Before we get properly coding the solution algorithm, it's also going to be useful to have a function that samples a realization of the random variable defined by Ω and P.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function sample_uncertainty(uncertainty::Uncertainty)\n r = rand()\n for (p, ω) in zip(uncertainty.P, uncertainty.Ω)\n r -= p\n if r < 0.0\n return ω\n end\n end\n return error(\"We should never get here because P should sum to 1.0.\")\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"note: Note\nrand() samples a uniform random variable in [0, 1).","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"For example:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"for i in 1:3\n println(\"ω = \", sample_uncertainty(model.nodes[1].uncertainty))\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"It's also going to be useful to define a 
function that generates a random walk through the nodes of the graph:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function sample_next_node(model::PolicyGraph, current::Int)\n if length(model.arcs[current]) == 0\n # No outgoing arcs!\n return nothing\n else\n r = rand()\n for (to, probability) in model.arcs[current]\n r -= probability\n if r < 0.0\n return to\n end\n end\n # We looped through the outgoing arcs and still have probability left\n # over! This means we've hit an implicit \"zero\" node.\n return nothing\n end\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"For example:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"for i in 1:3\n # We use `repr` to print the next node, because `sample_next_node` can\n # return `nothing`.\n println(\"Next node from $(i) = \", repr(sample_next_node(model, i)))\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"This is a little boring, because our graph is simple. However, more complicated graphs will generate more interesting trajectories!","category":"page"},{"location":"explanation/theory_intro/#Implementation:-the-forward-pass","page":"Introductory theory","title":"Implementation: the forward pass","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Recall that, after approximating the cost-to-go term, we need a way of generating the cuts. As the first step, we need a way of generating candidate solutions x_k^prime. However, unlike the Kelley's example, our functions V_j^k(x^prime varphi) need two inputs: an outgoing state variable and a realization of the random variable.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"One way of getting these inputs is just to pick a random (feasible) value. However, in doing so, we might pick outgoing state variables that we will never see in practice, or we might infrequently pick outgoing state variables that we will often see in practice. Therefore, a better way of generating the inputs is to use a simulation of the policy, which we call the forward pass.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"The forward pass walks the policy graph from start to end, transitioning randomly along the arcs. At each node, it observes a realization of the random variable and solves the approximated subproblem to generate a candidate outgoing state variable x_k^prime. 
The outgoing state variable is passed as the incoming state variable to the next node in the trajectory.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function forward_pass(model::PolicyGraph, io::IO = stdout)\n println(io, \"| Forward Pass\")\n # First, get the value of the state at the root node (e.g., x_R).\n incoming_state =\n Dict(k => JuMP.fix_value(v.in) for (k, v) in model.nodes[1].states)\n # `simulation_cost` is an accumlator that is going to sum the stage-costs\n # incurred over the forward pass.\n simulation_cost = 0.0\n # We also need to record the nodes visited and resultant outgoing state\n # variables so we can pass them to the backward pass.\n trajectory = Tuple{Int,Dict{Symbol,Float64}}[]\n # Now's the meat of the forward pass: beginning at the first node:\n t = 1\n while t !== nothing\n node = model.nodes[t]\n println(io, \"| | Visiting node $(t)\")\n # Sample the uncertainty:\n ω = sample_uncertainty(node.uncertainty)\n println(io, \"| | | ω = \", ω)\n # Parameterizing the subproblem using the user-provided function:\n node.uncertainty.parameterize(ω)\n println(io, \"| | | x = \", incoming_state)\n # Update the incoming state variable:\n for (k, v) in incoming_state\n JuMP.fix(node.states[k].in, v; force = true)\n end\n # Now solve the subproblem and check we found an optimal solution:\n JuMP.optimize!(node.subproblem)\n if JuMP.termination_status(node.subproblem) != JuMP.MOI.OPTIMAL\n error(\"Something went terribly wrong!\")\n end\n # Compute the outgoing state variables:\n outgoing_state = Dict(k => JuMP.value(v.out) for (k, v) in node.states)\n println(io, \"| | | x′ = \", outgoing_state)\n # We also need to compute the stage cost to add to our\n # `simulation_cost` accumulator:\n stage_cost =\n JuMP.objective_value(node.subproblem) - JuMP.value(node.cost_to_go)\n simulation_cost += stage_cost\n println(io, \"| | | C(x, u, ω) = \", stage_cost)\n # As a penultimate step, set the outgoing state of stage t and the\n # incoming state of stage t + 1, and add the node to the trajectory.\n incoming_state = outgoing_state\n push!(trajectory, (t, outgoing_state))\n # Finally, sample a new node to step to. If `t === nothing`, the\n # `while` loop will break.\n t = sample_next_node(model, t)\n end\n return trajectory, simulation_cost\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Let's take a look at one forward pass:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"trajectory, simulation_cost = forward_pass(model);\nnothing #hide","category":"page"},{"location":"explanation/theory_intro/#Implementation:-the-backward-pass","page":"Introductory theory","title":"Implementation: the backward pass","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"From the forward pass, we obtained a vector of nodes visited and their corresponding outgoing state variables. Now we need to refine the approximation for each node at the candidate solution for the outgoing state variable. 
That is, we need to add a new cut:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"theta ge mathbbE_j in i^+ varphi in Omega_jleftV_j^k(x^prime_k varphi) + fracddx^primeV_j^k(x^prime_k varphi)^top (x^prime - x^prime_k)right","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"or alternatively:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"theta ge sumlimits_j in i^+ sumlimits_varphi in Omega_j p_ij p_varphileftV_j^k(x^prime_k varphi) + fracddx^primeV_j^k(x^prime_k varphi)^top (x^prime - x^prime_k)right","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"It doesn't matter what order we visit the nodes to generate these cuts for. For example, we could compute them all in parallel, using the current approximations of V^K_i.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"However, we can be smarter than that.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"If we traverse the list of nodes visited in the forward pass in reverse, then we come to refine the i^th node in the trajectory, we will already have improved the approximation of the (i+1)^th node in the trajectory as well! Therefore, our refinement of the i^th node will be better than if we improved node i first, and then refined node (i+1).","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Because we walk the nodes in reverse, we call this the backward pass.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"info: Info\nIf you're into deep learning, you could view this as the equivalent of back-propagation: the forward pass pushes primal information through the graph (outgoing state variables), and the backward pass pulls dual information (cuts) back through the graph to improve our decisions on the next forward pass.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function backward_pass(\n model::PolicyGraph,\n trajectory::Vector{Tuple{Int,Dict{Symbol,Float64}}},\n io::IO = stdout,\n)\n println(io, \"| Backward pass\")\n # For the backward pass, we walk back up the nodes.\n for i in reverse(1:length(trajectory))\n index, outgoing_states = trajectory[i]\n node = model.nodes[index]\n println(io, \"| | Visiting node $(index)\")\n if length(model.arcs[index]) == 0\n # If there are no children, the cost-to-go is 0.\n println(io, \"| | | Skipping node because the cost-to-go is 0\")\n continue\n end\n # Create an empty affine expression that we will use to build up the\n # right-hand side of the cut expression.\n cut_expression = JuMP.AffExpr(0.0)\n # For each node j ∈ i⁺\n for (j, P_ij) in model.arcs[index]\n next_node = model.nodes[j]\n # Set the incoming state variables of node j to the outgoing state\n # variables of node i\n for (k, v) in outgoing_states\n JuMP.fix(next_node.states[k].in, v; force = true)\n end\n # Then for each realization of φ ∈ Ωⱼ\n for (pφ, φ) in zip(next_node.uncertainty.P, next_node.uncertainty.Ω)\n # Setup and solve for the 
realization of φ\n println(io, \"| | | Solving φ = \", φ)\n next_node.uncertainty.parameterize(φ)\n JuMP.optimize!(next_node.subproblem)\n # Then prepare the cut `P_ij * pφ * [V + dVdxᵀ(x - x_k)]``\n V = JuMP.objective_value(next_node.subproblem)\n println(io, \"| | | | V = \", V)\n dVdx = Dict(\n k => JuMP.reduced_cost(v.in) for (k, v) in next_node.states\n )\n println(io, \"| | | | dVdx′ = \", dVdx)\n cut_expression += JuMP.@expression(\n node.subproblem,\n P_ij *\n pφ *\n (\n V + sum(\n dVdx[k] * (x.out - outgoing_states[k]) for\n (k, x) in node.states\n )\n ),\n )\n end\n end\n # And then refine the cost-to-go variable by adding the cut:\n c = JuMP.@constraint(node.subproblem, node.cost_to_go >= cut_expression)\n println(io, \"| | | Adding cut : \", c)\n end\n return nothing\nend","category":"page"},{"location":"explanation/theory_intro/#Implementation:-bounds","page":"Introductory theory","title":"Implementation: bounds","text":"","category":"section"},{"location":"explanation/theory_intro/#Lower-bounds","page":"Introductory theory","title":"Lower bounds","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Recall from Kelley's that we can obtain a lower bound for f(x^*) be evaluating f^K. The analogous lower bound for a multistage stochastic program is:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"mathbbE_i in R^+ omega in Omega_iV_i^K(x_R omega) le min_pi mathbbE_i in R^+ omega in Omega_iV_i^pi(x_R omega)","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Here's how we compute the lower bound:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function lower_bound(model::PolicyGraph)\n node = model.nodes[1]\n bound = 0.0\n for (p, ω) in zip(node.uncertainty.P, node.uncertainty.Ω)\n node.uncertainty.parameterize(ω)\n JuMP.optimize!(node.subproblem)\n bound += p * JuMP.objective_value(node.subproblem)\n end\n return bound\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"note: Note\nThe implementation is simplified because we assumed that there is only one arc from the root node, and that it pointed to the first node in the vector.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Because we haven't trained a policy yet, the lower bound is going to be very bad:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"lower_bound(model)","category":"page"},{"location":"explanation/theory_intro/#Upper-bounds","page":"Introductory theory","title":"Upper bounds","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"With Kelley's algorithm, we could easily construct an upper bound by evaluating f(x_K). However, it is almost always intractable to evaluate an upper bound for multistage stochastic programs due to the large number of nodes and the nested expectations. 
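(Even the small example on this page has three nodes with three inflow realizations each, so an exact evaluation must already enumerate 3^3 = 27 scenario paths, and the number of paths grows exponentially with the number of stages.) 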
Instead, we can perform a Monte Carlo simulation of the policy to build a statistical estimate for the value of mathbbE_i in R^+ omega in Omega_iV_i^pi(x_R omega), where pi is the policy defined by the current approximations V^K_i.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function upper_bound(model::PolicyGraph; replications::Int)\n # Pipe the output to `devnull` so we don't print too much!\n simulations = [forward_pass(model, devnull) for i in 1:replications]\n z = [s[2] for s in simulations]\n μ = Statistics.mean(z)\n tσ = 1.96 * Statistics.std(z) / sqrt(replications)\n return μ, tσ\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"note: Note\nThe width of the confidence interval is incorrect if there are cycles in the graph, because the distribution of simulation costs z is not symmetric. The mean is correct, however.","category":"page"},{"location":"explanation/theory_intro/#Termination-criteria","page":"Introductory theory","title":"Termination criteria","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"In Kelley's algorithm, the upper bound was deterministic. Therefore, we could terminate the algorithm when the lower bound was sufficiently close to the upper bound. However, our upper bound for SDDP is not deterministic; it is a confidence interval!","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Some people suggest terminating SDDP when the lower bound is contained within the confidence interval. However, this is a poor choice because it is too easy to generate a false positive. For example, if we use a small number of replications then the width of the confidence will be large, and we are more likely to terminate!","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"In a future tutorial (not yet written...) we will discuss termination criteria in more depth. 
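To make the pitfall concrete, here is a sketch (ours, and not something we recommend using) of that confidence-interval test, written in terms of the lower_bound and upper_bound functions defined above. With a small number of replications the half-width tσ is wide, so this check tends to trigger long before the policy has converged:\n\nfunction naive_should_stop(model::PolicyGraph; replications::Int)\n    lb = lower_bound(model)\n    μ, tσ = upper_bound(model; replications = replications)\n    # Stop if the lower bound lies inside the confidence interval.\n    return μ - tσ <= lb <= μ + tσ\nend\n\n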
For now, pick a large number of iterations and train for as long as possible.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"tip: Tip\nFor a rule of thumb, pick a large number of iterations to train the policy for (e.g., 10 times mathcalN times maxlimits_iinmathcalN Omega_i)","category":"page"},{"location":"explanation/theory_intro/#Implementation:-the-training-loop","page":"Introductory theory","title":"Implementation: the training loop","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"The train loop of SDDP just applies the forward and backward passes iteratively, followed by a final simulation to compute the upper bound confidence interval:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function train(\n model::PolicyGraph;\n iteration_limit::Int,\n replications::Int,\n io::IO = stdout,\n)\n for i in 1:iteration_limit\n println(io, \"Starting iteration $(i)\")\n outgoing_states, _ = forward_pass(model, io)\n backward_pass(model, outgoing_states, io)\n println(io, \"| Finished iteration\")\n println(io, \"| | lower_bound = \", lower_bound(model))\n end\n println(io, \"Termination status: iteration limit\")\n μ, tσ = upper_bound(model; replications = replications)\n println(io, \"Upper bound = $(μ) ± $(tσ)\")\n return\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Using our model we defined earlier, we can go:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"train(model; iteration_limit = 3, replications = 100)","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Success! We trained a policy for a finite horizon multistage stochastic program using stochastic dual dynamic programming.","category":"page"},{"location":"explanation/theory_intro/#Implementation:-evaluating-the-policy","page":"Introductory theory","title":"Implementation: evaluating the policy","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"A final step is the ability to evaluate the policy at a given point.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function evaluate_policy(\n model::PolicyGraph;\n node::Int,\n incoming_state::Dict{Symbol,Float64},\n random_variable,\n)\n the_node = model.nodes[node]\n the_node.uncertainty.parameterize(random_variable)\n for (k, v) in incoming_state\n JuMP.fix(the_node.states[k].in, v; force = true)\n end\n JuMP.optimize!(the_node.subproblem)\n return Dict(\n k => JuMP.value.(v) for\n (k, v) in JuMP.object_dictionary(the_node.subproblem)\n )\nend\n\nevaluate_policy(\n model;\n node = 1,\n incoming_state = Dict(:volume => 150.0),\n random_variable = 75,\n)","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"note: Note\nThe random variable can be out-of-sample, i.e., it doesn't have to be in the vector Omega we created when defining the model! 
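Indeed, the call above used random_variable = 75, which is not one of the in-sample realizations 0.0, 50.0, and 100.0. Any other value works the same way; for example (125.0 is an illustrative choice of ours):\n\nevaluate_policy(\n    model;\n    node = 1,\n    incoming_state = Dict(:volume => 150.0),\n    random_variable = 125.0,\n)\n\n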
This is a notable difference to other multistage stochastic solution methods like progressive hedging or using the deterministic equivalent.","category":"page"},{"location":"explanation/theory_intro/#Example:-infinite-horizon","page":"Introductory theory","title":"Example: infinite horizon","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"As promised earlier, our implementation is actually pretty general. It can solve any multistage stochastic (linear) program defined by a policy graph, including infinite horizon problems!","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Here's an example, where we have extended our earlier problem with an arc from node 3 to node 2 with probability 0.5. You can interpret the 0.5 as a discount factor.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"model = PolicyGraph(\n subproblem_builder;\n graph = [Dict(2 => 1.0), Dict(3 => 1.0), Dict(2 => 0.5)],\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n)","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Then, train a policy:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"train(model; iteration_limit = 3, replications = 100)","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Success! We trained a policy for an infinite horizon multistage stochastic program using stochastic dual dynamic programming. Note how some of the forward passes are different lengths!","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"evaluate_policy(\n model;\n node = 3,\n incoming_state = Dict(:volume => 100.0),\n random_variable = 10.0,\n)","category":"page"},{"location":"examples/generation_expansion/","page":"Generation expansion","title":"Generation expansion","text":"EditURL = \"generation_expansion.jl\"","category":"page"},{"location":"examples/generation_expansion/#Generation-expansion","page":"Generation expansion","title":"Generation expansion","text":"","category":"section"},{"location":"examples/generation_expansion/","page":"Generation expansion","title":"Generation expansion","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/generation_expansion/","page":"Generation expansion","title":"Generation expansion","text":"using SDDP\nimport HiGHS\nimport Test\n\nfunction generation_expansion(duality_handler)\n build_cost = 1e4\n use_cost = 4\n num_units = 5\n capacities = ones(num_units)\n demand_vals =\n 0.5 * [\n 5 5 5 5 5 5 5 5\n 4 3 1 3 0 9 8 17\n 0 9 4 2 19 19 13 7\n 25 11 4 14 4 6 15 12\n 6 7 5 3 8 4 17 13\n ]\n # Cost of unmet demand\n penalty = 5e5\n # Discounting rate\n rho = 0.99\n model = SDDP.LinearPolicyGraph(;\n stages = 5,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, stage\n @variable(\n sp,\n 0 <= invested[1:num_units] <= 1,\n SDDP.State,\n Int,\n initial_value = 0\n )\n @variables(sp, begin\n generation >= 0\n unmet >= 0\n demand\n end)\n\n @constraints(\n sp,\n begin\n # Can't un-invest\n investment[i in 1:num_units], invested[i].out >= invested[i].in\n # Generation capacity\n sum(capacities[i] * invested[i].out for i in 1:num_units) >=\n generation\n # Meet demand or pay a penalty\n unmet >= demand - sum(generation)\n # For fewer iterations order the units to break symmetry, units are identical (tougher numerically)\n [j in 1:(num_units-1)], invested[j].out <= invested[j+1].out\n end\n )\n # Demand is uncertain\n SDDP.parameterize(ω -> JuMP.fix(demand, ω), sp, demand_vals[stage, :])\n\n @expression(\n sp,\n investment_cost,\n build_cost *\n sum(invested[i].out - invested[i].in for i in 1:num_units)\n )\n @stageobjective(\n sp,\n (investment_cost + generation * use_cost) * rho^(stage - 1) +\n penalty * unmet\n )\n end\n if get(ARGS, 1, \"\") == \"--write\"\n # Run `$ julia generation_expansion.jl --write` to update the benchmark\n # model directory\n model_dir = joinpath(@__DIR__, \"..\", \"..\", \"..\", \"benchmarks\", \"models\")\n SDDP.write_to_file(\n model,\n joinpath(model_dir, \"generation_expansion.sof.json.gz\");\n test_scenarios = 100,\n )\n exit(0)\n end\n SDDP.train(model; log_frequency = 10, duality_handler = duality_handler)\n Test.@test SDDP.calculate_bound(model) ≈ 2.078860e6 atol = 1e3\n return\nend\n\ngeneration_expansion(SDDP.ContinuousConicDuality())\ngeneration_expansion(SDDP.LagrangianDuality())","category":"page"},{"location":"examples/biobjective_hydro/","page":"Biobjective hydro-thermal","title":"Biobjective hydro-thermal","text":"EditURL = \"biobjective_hydro.jl\"","category":"page"},{"location":"examples/biobjective_hydro/#Biobjective-hydro-thermal","page":"Biobjective hydro-thermal","title":"Biobjective hydro-thermal","text":"","category":"section"},{"location":"examples/biobjective_hydro/","page":"Biobjective hydro-thermal","title":"Biobjective hydro-thermal","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/biobjective_hydro/","page":"Biobjective hydro-thermal","title":"Biobjective hydro-thermal","text":"using SDDP, HiGHS, Statistics, Test\n\nfunction biobjective_example()\n model = SDDP.LinearPolicyGraph(;\n stages = 3,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, _\n @variable(subproblem, 0 <= v <= 200, SDDP.State, initial_value = 50)\n @variables(subproblem, begin\n 0 <= g[i = 1:2] <= 100\n 0 <= u <= 150\n s >= 0\n shortage_cost >= 0\n end)\n @expressions(subproblem, begin\n objective_1, g[1] + 10 * g[2]\n objective_2, shortage_cost\n end)\n @constraints(subproblem, begin\n inflow_constraint, v.out == v.in - u - s\n g[1] + g[2] + u == 150\n shortage_cost >= 40 - v.out\n shortage_cost >= 60 - 2 * v.out\n shortage_cost >= 80 - 4 * v.out\n end)\n # You must call this for a biobjective problem!\n SDDP.initialize_biobjective_subproblem(subproblem)\n SDDP.parameterize(subproblem, 0.0:5:50.0) do ω\n JuMP.set_normalized_rhs(inflow_constraint, ω)\n # You must call `set_biobjective_functions` from within\n # `SDDP.parameterize`.\n return SDDP.set_biobjective_functions(\n subproblem,\n objective_1,\n objective_2,\n )\n end\n end\n pareto_weights =\n SDDP.train_biobjective(model; solution_limit = 10, iteration_limit = 10)\n solutions = [(k, v) for (k, v) in pareto_weights]\n sort!(solutions; by = x -> x[1])\n @test length(solutions) == 10\n # Test for convexity! The gradient must be decreasing as we move from left\n # to right.\n gradient(a, b) = (b[2] - a[2]) / (b[1] - a[1])\n grad = Inf\n for i in 1:9\n new_grad = gradient(solutions[i], solutions[i+1])\n @test new_grad < grad\n grad = new_grad\n end\n return\nend\n\nbiobjective_example()","category":"page"},{"location":"examples/asset_management_simple/","page":"Asset management","title":"Asset management","text":"EditURL = \"asset_management_simple.jl\"","category":"page"},{"location":"examples/asset_management_simple/#Asset-management","page":"Asset management","title":"Asset management","text":"","category":"section"},{"location":"examples/asset_management_simple/","page":"Asset management","title":"Asset management","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/asset_management_simple/","page":"Asset management","title":"Asset management","text":"Taken from the book J.R. Birge, F. 
Louveaux, Introduction to Stochastic Programming, Springer Series in Operations Research and Financial Engineering, Springer New York, New York, NY, 2011","category":"page"},{"location":"examples/asset_management_simple/","page":"Asset management","title":"Asset management","text":"using SDDP, HiGHS, Test\n\nfunction asset_management_simple()\n model = SDDP.PolicyGraph(\n SDDP.MarkovianGraph(\n Array{Float64,2}[\n [1.0]',\n [0.5 0.5],\n [0.5 0.5; 0.5 0.5],\n [0.5 0.5; 0.5 0.5],\n ],\n );\n lower_bound = -1_000.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, index\n (stage, markov_state) = index\n r_stock = [1.25, 1.06]\n r_bonds = [1.14, 1.12]\n @variable(subproblem, stocks >= 0, SDDP.State, initial_value = 0.0)\n @variable(subproblem, bonds >= 0, SDDP.State, initial_value = 0.0)\n if stage == 1\n @constraint(subproblem, stocks.out + bonds.out == 55)\n @stageobjective(subproblem, 0)\n elseif 1 < stage < 4\n @constraint(\n subproblem,\n r_stock[markov_state] * stocks.in +\n r_bonds[markov_state] * bonds.in == stocks.out + bonds.out\n )\n @stageobjective(subproblem, 0)\n else\n @variable(subproblem, over >= 0)\n @variable(subproblem, short >= 0)\n @constraint(\n subproblem,\n r_stock[markov_state] * stocks.in +\n r_bonds[markov_state] * bonds.in - over + short == 80\n )\n @stageobjective(subproblem, -over + 4 * short)\n end\n end\n SDDP.train(model; log_frequency = 5)\n @test SDDP.calculate_bound(model) ≈ 1.514 atol = 1e-4\n return\nend\n\nasset_management_simple()","category":"page"},{"location":"guides/access_previous_variables/#Access-variables-from-a-previous-stage","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"","category":"section"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"A common question is \"how do I use a variable from a previous stage in a constraint?\"","category":"page"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"info: Info\nIf you want to use a variable from a previous stage, it must be a state variable.","category":"page"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"Here are some examples:","category":"page"},{"location":"guides/access_previous_variables/#Access-a-first-stage-decision-in-a-future-stage","page":"Access variables from a previous stage","title":"Access a first-stage decision in a future stage","text":"","category":"section"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"This is often useful if your first-stage decisions are capacity-expansion type decisions (e.g., you choose first how much capacity to add, but because it takes time to build, it only shows up in some future stage).","category":"page"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"using SDDP, HiGHS\nSDDP.LinearPolicyGraph(\n stages = 10,\n sense = :Max,\n upper_bound = 100.0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n # Capacity of the generator. 
Decided in the first stage.\n @variable(sp, capacity >= 0, SDDP.State, initial_value = 0)\n # Quantity of water stored.\n @variable(sp, reservoir >= 0, SDDP.State, initial_value = 0)\n # Quantity of water to use for electricity generation in current stage.\n @variable(sp, generation >= 0)\n if t == 1\n # There are no constraints in the first stage, but we need to push the\n # initial value of the reservoir to the next stage.\n @constraint(sp, reservoir.out == reservoir.in)\n # Since we're maximizing profit, subtract cost of capacity.\n @stageobjective(sp, -capacity.out)\n else\n # Water balance constraint.\n @constraint(sp, balance, reservoir.out - reservoir.in + generation == 0)\n # Generation limit.\n @constraint(sp, generation <= capacity.in)\n # Push capacity to the next stage.\n @constraint(sp, capacity.out == capacity.in)\n # Maximize generation.\n @stageobjective(sp, generation)\n # Random inflow in balance constraint.\n SDDP.parameterize(sp, rand(4)) do w\n set_normalized_rhs(balance, w)\n end\n end\nend","category":"page"},{"location":"guides/access_previous_variables/#Access-a-decision-from-N-stages-ago","page":"Access variables from a previous stage","title":"Access a decision from N stages ago","text":"","category":"section"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"This is often useful if have some inventory problem with a lead-time on orders.","category":"page"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"using SDDP, HiGHS\nSDDP.LinearPolicyGraph(\n stages = 10,\n sense = :Max,\n upper_bound = 100,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n # Current inventory on hand.\n @variable(sp, inventory >= 0, SDDP.State, initial_value = 0)\n # Inventory pipeline.\n # pipeline[1].out are orders placed today.\n # pipeline[5].in are orders that arrive today and can be added to the\n # current inventory.\n # Stock moves up one slot in the pipeline each stage.\n @variable(sp, pipeline[1:5], SDDP.State, initial_value = 0)\n # The number of units to order today.\n @variable(sp, 0 <= buy <= 10)\n # The number of units to sell today.\n @variable(sp, sell >= 0)\n # Buy orders get placed in the pipeline.\n @constraint(sp, pipeline[1].out == buy)\n # Stock moves up one slot in the pipeline each stage.\n @constraint(sp, [i=2:5], pipeline[i].out == pipeline[i-1].in)\n # Stock balance constraint.\n @constraint(sp, inventory.out == inventory.in - sell + pipeline[5].in)\n # Maximize quantity of sold items.\n @stageobjective(sp, sell)\nend","category":"page"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"warning: Warning\nYou must initialize the same number of state variables in every stage, even if they are not used in that stage.","category":"page"},{"location":"guides/create_a_belief_state/","page":"Create a belief state","title":"Create a belief state","text":"DocTestSetup = quote\n using SDDP\nend","category":"page"},{"location":"guides/create_a_belief_state/#Create-a-belief-state","page":"Create a belief state","title":"Create a belief state","text":"","category":"section"},{"location":"guides/create_a_belief_state/","page":"Create a belief state","title":"Create a belief state","text":"SDDP.jl includes an implementation of the algorithm described in Dowson, O., Morton, 
D.P., & Pagnoncelli, B.K. (2020). Partially observable multistage stochastic optimization. Operations Research Letters, 48(4), 505–512.","category":"page"},{"location":"guides/create_a_belief_state/","page":"Create a belief state","title":"Create a belief state","text":"Given a SDDP.Graph object (see Create a general policy graph for details), we can define the ambiguity partition using SDDP.add_ambiguity_set.","category":"page"},{"location":"guides/create_a_belief_state/","page":"Create a belief state","title":"Create a belief state","text":"For example, first we create a Markovian graph:","category":"page"},{"location":"guides/create_a_belief_state/","page":"Create a belief state","title":"Create a belief state","text":"using SDDP\nG = SDDP.MarkovianGraph([[0.5 0.5], [0.2 0.8; 0.8 0.2]])","category":"page"},{"location":"guides/create_a_belief_state/","page":"Create a belief state","title":"Create a belief state","text":"Then we add an ambiguity set over the nodes in the each stage:","category":"page"},{"location":"guides/create_a_belief_state/","page":"Create a belief state","title":"Create a belief state","text":"for t in 1:2\n SDDP.add_ambiguity_set(G, [(t, 1), (t, 2)])\nend","category":"page"},{"location":"guides/create_a_belief_state/","page":"Create a belief state","title":"Create a belief state","text":"This results in the graph:","category":"page"},{"location":"guides/create_a_belief_state/","page":"Create a belief state","title":"Create a belief state","text":"G","category":"page"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"CurrentModule = SDDP","category":"page"},{"location":"release_notes/#Release-notes","page":"Release notes","title":"Release notes","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.","category":"page"},{"location":"release_notes/#[v1.9.0](https://github.com/odow/SDDP.jl/releases/tag/v1.9.0)-(October-17,-2024)","page":"Release notes","title":"v1.9.0 (October 17, 2024)","text":"","category":"section"},{"location":"release_notes/#Added","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added write_only_selected_cuts and cut_selection keyword arguments to write_cuts_to_file and read_cuts_from_file to skip potentially expensive operations (#781) (#784)\nAdded set_numerical_difficulty_callback to modify the subproblem on numerical difficulty (#790)","category":"page"},{"location":"release_notes/#Fixed","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed the tests to skip threading tests if running in serial (#770)\nFixed BanditDuality to handle the case where the standard deviation is NaN (#779)\nFixed an error when lagged state variables are encountered in MSPFormat (#786)\nFixed publication_plot with replications of different lengths (#788)\nFixed CTRL+C interrupting the code at unsafe points (#789)","category":"page"},{"location":"release_notes/#Other","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#771) (#772)\nUpdated printing because of changes in JuMP 
(#773)","category":"page"},{"location":"release_notes/#[v1.8.1](https://github.com/odow/SDDP.jl/releases/tag/v1.8.1)-(August-5,-2024)","page":"Release notes","title":"v1.8.1 (August 5, 2024)","text":"","category":"section"},{"location":"release_notes/#Fixed-2","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed various issues with SDDP.Threaded() (#761)\nFixed a deprecation warning for sorting a dictionary (#763)","category":"page"},{"location":"release_notes/#Other-2","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Updated copyright notices (#762)\nUpdated .JuliaFormatter.toml (#764)","category":"page"},{"location":"release_notes/#[v1.8.0](https://github.com/odow/SDDP.jl/releases/tag/v1.8.0)-(July-24,-2024)","page":"Release notes","title":"v1.8.0 (July 24, 2024)","text":"","category":"section"},{"location":"release_notes/#Added-2","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added SDDP.Threaded(), which is an experimental parallel scheme that supports solving problems using multiple threads. Some parts of SDDP.jl may not be thread-safe, and this can cause incorrect results, segfaults, or other errors. Please use with care and report any issues by opening a GitHub issue. (#758)","category":"page"},{"location":"release_notes/#Other-3","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Documentation improvements and fixes (#747) (#759)","category":"page"},{"location":"release_notes/#[v1.7.0](https://github.com/odow/SDDP.jl/releases/tag/v1.7.0)-(June-4,-2024)","page":"Release notes","title":"v1.7.0 (June 4, 2024)","text":"","category":"section"},{"location":"release_notes/#Added-3","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added sample_backward_noise_terms_with_state for creating backward pass sampling schemes that depend on the current primal state. 
(#742) (Thanks @arthur-brigatto)","category":"page"},{"location":"release_notes/#Fixed-3","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed error message when publication_plot has non-finite data (#738)","category":"page"},{"location":"release_notes/#Other-4","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Updated the logo constructor (#730)","category":"page"},{"location":"release_notes/#[v1.6.7](https://github.com/odow/SDDP.jl/releases/tag/v1.6.7)-(February-1,-2024)","page":"Release notes","title":"v1.6.7 (February 1, 2024)","text":"","category":"section"},{"location":"release_notes/#Fixed-4","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed non-constant state dimension in the MSPFormat reader (#695)\nFixed SimulatorSamplingScheme for deterministic nodes (#710)\nFixed line search in BFGS (#711)\nFixed handling of NEARLY_FEASIBLE_POINT status (#726)","category":"page"},{"location":"release_notes/#Other-5","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#692) (#694) (#706) (#716) (#727)\nUpdated to StochOptFormat v1.0 (#705)\nAdded an experimental OuterApproximation algorithm (#709)\nUpdated .gitignore (#717)\nAdded code for MDP paper (#720) (#721)\nAdded Google analytics (#723)","category":"page"},{"location":"release_notes/#[v1.6.6](https://github.com/odow/SDDP.jl/releases/tag/v1.6.6)-(September-29,-2023)","page":"Release notes","title":"v1.6.6 (September 29, 2023)","text":"","category":"section"},{"location":"release_notes/#Other-6","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Updated Example: two-stage newsvendor tutorial (#689)\nAdded a warning for people using SDDP.Statistical (#687)","category":"page"},{"location":"release_notes/#[v1.6.5](https://github.com/odow/SDDP.jl/releases/tag/v1.6.5)-(September-25,-2023)","page":"Release notes","title":"v1.6.5 (September 25, 2023)","text":"","category":"section"},{"location":"release_notes/#Fixed-5","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed duplicate nodes in MarkovianGraph (#681)","category":"page"},{"location":"release_notes/#Other-7","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Updated tutorials (#677) (#678) (#682) (#683)\nFixed documentation preview (#679)","category":"page"},{"location":"release_notes/#[v1.6.4](https://github.com/odow/SDDP.jl/releases/tag/v1.6.4)-(September-23,-2023)","page":"Release notes","title":"v1.6.4 (September 23, 2023)","text":"","category":"section"},{"location":"release_notes/#Fixed-6","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed error for invalid log_frequency values (#665)\nFixed objective sense in deterministic_equivalent (#673)","category":"page"},{"location":"release_notes/#Other-8","page":"Release 
notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Documentation updates (#658) (#666) (#671)\nSwitch to GitHub action for deploying docs (#668) (#670)\nUpdate to Documenter@1 (#669)","category":"page"},{"location":"release_notes/#[v1.6.3](https://github.com/odow/SDDP.jl/releases/tag/v1.6.3)-(September-8,-2023)","page":"Release notes","title":"v1.6.3 (September 8, 2023)","text":"","category":"section"},{"location":"release_notes/#Fixed-7","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed default stopping rule with iteration_limit or time_limit set (#662)","category":"page"},{"location":"release_notes/#Other-9","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Various documentation improvements (#651) (#657) (#659) (#660)","category":"page"},{"location":"release_notes/#[v1.6.2](https://github.com/odow/SDDP.jl/releases/tag/v1.6.2)-(August-24,-2023)","page":"Release notes","title":"v1.6.2 (August 24, 2023)","text":"","category":"section"},{"location":"release_notes/#Fixed-8","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"MSPFormat now detect and exploit stagewise independent lattices (#653)\nFixed set_optimizer for models read from file (#654)","category":"page"},{"location":"release_notes/#Other-10","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed typo in pglib_opf.jl (#647)\nFixed documentation build and added color (#652)","category":"page"},{"location":"release_notes/#[v1.6.1](https://github.com/odow/SDDP.jl/releases/tag/v1.6.1)-(July-20,-2023)","page":"Release notes","title":"v1.6.1 (July 20, 2023)","text":"","category":"section"},{"location":"release_notes/#Fixed-9","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed bugs in MSPFormat reader (#638) (#639)","category":"page"},{"location":"release_notes/#Other-11","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Clarified OutOfSampleMonteCarlo docstring (#643)","category":"page"},{"location":"release_notes/#[v1.6.0](https://github.com/odow/SDDP.jl/releases/tag/v1.6.0)-(July-3,-2023)","page":"Release notes","title":"v1.6.0 (July 3, 2023)","text":"","category":"section"},{"location":"release_notes/#Added-4","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added RegularizedForwardPass (#624)\nAdded FirstStageStoppingRule (#634)","category":"page"},{"location":"release_notes/#Other-12","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Removed an unbound type parameter (#632)\nFixed typo in docstring (#633)\nAdded Here-and-now and hazard-decision tutorial (#635)","category":"page"},{"location":"release_notes/#[v1.5.1](https://github.com/odow/SDDP.jl/releases/tag/v1.5.1)-(June-30,-2023)","page":"Release 
notes","title":"v1.5.1 (June 30, 2023)","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"This release contains a number of minor code changes, but it has a large impact on the content that is printed to screen. In particular, we now log periodically, instead of each iteration, and a \"good\" stopping rule is used as the default if none are specified. Try using SDDP.train(model) to see the difference.","category":"page"},{"location":"release_notes/#Other-13","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed various typos in the documentation (#617)\nFixed printing test after changes in JuMP (#618)\nSet SimulationStoppingRule as the default stopping rule (#619)\nChanged the default logging frequency. Pass log_every_seconds = 0.0 to train to revert to the old behavior. (#620)\nAdded example usage with Distributions.jl (@slwu89) (#622)\nRemoved the numerical issue @warn (#627)\nImproved the quality of docstrings (#630)","category":"page"},{"location":"release_notes/#[v1.5.0](https://github.com/odow/SDDP.jl/releases/tag/v1.5.0)-(May-14,-2023)","page":"Release notes","title":"v1.5.0 (May 14, 2023)","text":"","category":"section"},{"location":"release_notes/#Added-5","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added the ability to use a different model for the forward pass. This is a novel feature that lets you train better policies when the model is non-convex or does not have a well-defined dual. See the Alternative forward models tutorial in which we train convex and non-convex formulations of the optimal power flow problem. (#611)","category":"page"},{"location":"release_notes/#Other-14","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Updated missing changelog entries (#608)\nRemoved global variables (#610)\nConverted the Options struct to keyword arguments. This struct was a private implementation detail, but the change is breaking if you developed an extension to SDDP that touched these internals. 
(#612)\nFixed some typos (#613)","category":"page"},{"location":"release_notes/#[v1.4.0](https://github.com/odow/SDDP.jl/releases/tag/v1.4.0)-(May-8,-2023)","page":"Release notes","title":"v1.4.0 (May 8, 2023)","text":"","category":"section"},{"location":"release_notes/#Added-6","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added SDDP.SimulationStoppingRule (#598)\nAdded sampling_scheme argument to SDDP.write_to_file (#607)","category":"page"},{"location":"release_notes/#Fixed-10","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed parsing of some MSPFormat files (#602) (#604)\nFixed printing in header (#605)","category":"page"},{"location":"release_notes/#[v1.3.0](https://github.com/odow/SDDP.jl/releases/tag/v1.3.0)-(May-3,-2023)","page":"Release notes","title":"v1.3.0 (May 3, 2023)","text":"","category":"section"},{"location":"release_notes/#Added-7","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added experimental support for SDDP.MSPFormat.read_from_file (#593)","category":"page"},{"location":"release_notes/#Other-15","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Updated to StochOptFormat v0.3 (#600)","category":"page"},{"location":"release_notes/#[v1.2.1](https://github.com/odow/SDDP.jl/releases/tag/v1.2.1)-(May-1,-2023)","page":"Release notes","title":"v1.2.1 (May 1, 2023)","text":"","category":"section"},{"location":"release_notes/#Fixed-11","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed log_every_seconds (#597)","category":"page"},{"location":"release_notes/#[v1.2.0](https://github.com/odow/SDDP.jl/releases/tag/v1.2.0)-(May-1,-2023)","page":"Release notes","title":"v1.2.0 (May 1, 2023)","text":"","category":"section"},{"location":"release_notes/#Added-8","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added SDDP.SimulatorSamplingScheme (#594)\nAdded log_every_seconds argument to SDDP.train (#595)","category":"page"},{"location":"release_notes/#Other-16","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Tweaked how the log is printed (#588)\nUpdated to StochOptFormat v0.2 (#592)","category":"page"},{"location":"release_notes/#[v1.1.4](https://github.com/odow/SDDP.jl/releases/tag/v1.1.4)-(April-10,-2023)","page":"Release notes","title":"v1.1.4 (April 10, 2023)","text":"","category":"section"},{"location":"release_notes/#Fixed-12","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Logs are now flushed every iteration (#584)","category":"page"},{"location":"release_notes/#Other-17","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added docstrings to various functions (#581)\nMinor documentation updates (#580)\nClarified integrality 
documentation (#582)\nUpdated the README (#585)\nNumber of numerical issues is now printed to the log (#586)","category":"page"},{"location":"release_notes/#[v1.1.3](https://github.com/odow/SDDP.jl/releases/tag/v1.1.3)-(April-2,-2023)","page":"Release notes","title":"v1.1.3 (April 2, 2023)","text":"","category":"section"},{"location":"release_notes/#Other-18","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed typo in Example: deterministic to stochastic tutorial (#578)\nFixed typo in documentation of SDDP.simulate (#577)","category":"page"},{"location":"release_notes/#[v1.1.2](https://github.com/odow/SDDP.jl/releases/tag/v1.1.2)-(March-18,-2023)","page":"Release notes","title":"v1.1.2 (March 18, 2023)","text":"","category":"section"},{"location":"release_notes/#Other-19","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added Example: deterministic to stochastic tutorial (#572)","category":"page"},{"location":"release_notes/#[v1.1.1](https://github.com/odow/SDDP.jl/releases/tag/v1.1.1)-(March-16,-2023)","page":"Release notes","title":"v1.1.1 (March 16, 2023)","text":"","category":"section"},{"location":"release_notes/#Other-20","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed email in Project.toml\nAdded notebook to documentation tutorials (#571)","category":"page"},{"location":"release_notes/#[v1.1.0](https://github.com/odow/SDDP.jl/releases/tag/v1.1.0)-(January-12,-2023)","page":"Release notes","title":"v1.1.0 (January 12, 2023)","text":"","category":"section"},{"location":"release_notes/#Added-9","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added the node_name_parser argument to SDDP.write_cuts_to_file and added the option to skip nodes in SDDP.read_cuts_from_file (#565)","category":"page"},{"location":"release_notes/#[v1.0.0](https://github.com/odow/SDDP.jl/releases/tag/v1.0.0)-(January-3,-2023)","page":"Release notes","title":"v1.0.0 (January 3, 2023)","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Although we're bumping MAJOR version, this is a non-breaking release. Going forward:","category":"page"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"New features will bump the MINOR version\nBug fixes, maintenance, and documentation updates will bump the PATCH version\nWe will support only the Long Term Support (currently v1.6.7) and the latest patch (currently v1.8.4) releases of Julia. Updates to the LTS version will bump the MINOR version\nUpdates to the compat bounds of package dependencies will bump the PATCH version.","category":"page"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"We do not intend any breaking changes to the public API, which would require a new MAJOR release. The public API is everything defined in the documentation. 
Anything not in the documentation is considered private and may change in any PATCH release.","category":"page"},{"location":"release_notes/#Added-10","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added num_nodes argument to SDDP.UnicyclicGraph (#562)\nAdded support for passing an optimizer to SDDP.Asynchronous (#545)","category":"page"},{"location":"release_notes/#Other-21","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Updated Plotting tools to use live plots (#563)\nAdded vale as a linter (#565)\nImproved documentation for initializing a parallel scheme (#566)","category":"page"},{"location":"release_notes/#[v0.4.9](https://github.com/odow/SDDP.jl/releases/tag/v0.4.9)-(January-3,-2023)","page":"Release notes","title":"v0.4.9 (January 3, 2023)","text":"","category":"section"},{"location":"release_notes/#Added-11","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added SDDP.UnicyclicGraph (#556)","category":"page"},{"location":"release_notes/#Other-22","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added tutorial on Markov Decision Processes (#556)\nAdded two-stage newsvendor tutorial (#557)\nRefactored the layout of the documentation (#554) (#555)\nUpdated copyright to 2023 (#558)\nFixed errors in the documentation (#561)","category":"page"},{"location":"release_notes/#[v0.4.8](https://github.com/odow/SDDP.jl/releases/tag/v0.4.8)-(December-19,-2022)","page":"Release notes","title":"v0.4.8 (December 19, 2022)","text":"","category":"section"},{"location":"release_notes/#Added-12","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added terminate_on_cycle option to SDDP.Historical (#549)\nAdded include_last_node option to SDDP.DefaultForwardPass (#547)","category":"page"},{"location":"release_notes/#Fixed-13","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Reverted then fixed (#531) because it failed to account for problems with integer variables (#546) (#551)","category":"page"},{"location":"release_notes/#[v0.4.7](https://github.com/odow/SDDP.jl/releases/tag/v0.4.7)-(December-17,-2022)","page":"Release notes","title":"v0.4.7 (December 17, 2022)","text":"","category":"section"},{"location":"release_notes/#Added-13","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added initial_node support to InSampleMonteCarlo and OutOfSampleMonteCarlo (#535)","category":"page"},{"location":"release_notes/#Fixed-14","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Rethrow InterruptException when solver is interrupted (#534)\nFixed numerical recovery when we need dual solutions (#531) (Thanks @bfpc)\nFixed re-using the dashboard = true option between solves (#538)\nFixed bug when no @stageobjective is set (now defaults to 0.0) (#539)\nFixed errors thrown when invalid 
inputs are provided to add_objective_state (#540)","category":"page"},{"location":"release_notes/#Other-23","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Drop support for Julia versions prior to 1.6 (#533)\nUpdated versions of dependencies (#522) (#533)\nSwitched to HiGHS in the documentation and tests (#533)\nAdded license headers (#519)\nFixed link in air conditioning example (#521) (Thanks @conema)\nClarified variable naming in deterministic equivalent (#525) (Thanks @lucasprocessi)\nAdded this change log (#536)\nCuts are now written to model.cuts.json when numerical instability is discovered. This can aid debugging because it allows you to reload the cuts as of the iteration that caused the numerical issue (#537)","category":"page"},{"location":"release_notes/#[v0.4.6](https://github.com/odow/SDDP.jl/releases/tag/v0.4.6)-(March-25,-2022)","page":"Release notes","title":"v0.4.6 (March 25, 2022)","text":"","category":"section"},{"location":"release_notes/#Other-24","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Updated to JuMP v1.0 (#517)","category":"page"},{"location":"release_notes/#[v0.4.5](https://github.com/odow/SDDP.jl/releases/tag/v0.4.5)-(March-9,-2022)","page":"Release notes","title":"v0.4.5 (March 9, 2022)","text":"","category":"section"},{"location":"release_notes/#Fixed-15","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed issue with set_silent in a subproblem (#510)","category":"page"},{"location":"release_notes/#Other-25","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed many typos (#500) (#501) (#506) (#511) (Thanks @bfpc)\nUpdate to JuMP v0.23 (#514)\nAdded auto-regressive tutorial (#507)","category":"page"},{"location":"release_notes/#[v0.4.4](https://github.com/odow/SDDP.jl/releases/tag/v0.4.4)-(December-11,-2021)","page":"Release notes","title":"v0.4.4 (December 11, 2021)","text":"","category":"section"},{"location":"release_notes/#Added-14","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added BanditDuality (#471)\nAdded benchmark scripts (#475) (#476) (#490)\nwrite_cuts_to_file now saves visited states (#468)","category":"page"},{"location":"release_notes/#Fixed-16","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed BoundStalling in a deterministic policy (#470) (#474)\nFixed magnitude warning with zero coefficients (#483)","category":"page"},{"location":"release_notes/#Other-26","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Improvements to LagrangianDuality (#481) (#482) (#487)\nImprovements to StrengthenedConicDuality (#486)\nSwitch to functional form for the tests (#478)\nFixed typos (#472) (Thanks @vfdev-5)\nUpdate to JuMP v0.22 (#498)","category":"page"},{"location":"release_notes/#[v0.4.3](https://github.com/odow/SDDP.jl/releases/tag/v0.4.3)-(August-31,-2021)","page":"Release notes","title":"v0.4.3 (August 31, 
2021)","text":"","category":"section"},{"location":"release_notes/#Added-15","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added biobjective solver (#462)\nAdded forward_pass_callback (#466)","category":"page"},{"location":"release_notes/#Other-27","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Update tutorials and documentation (#459) (#465)\nOrganize how paper materials are stored (#464)","category":"page"},{"location":"release_notes/#[v0.4.2](https://github.com/odow/SDDP.jl/releases/tag/v0.4.2)-(August-24,-2021)","page":"Release notes","title":"v0.4.2 (August 24, 2021)","text":"","category":"section"},{"location":"release_notes/#Fixed-17","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed a bug in Lagrangian duality (#457)","category":"page"},{"location":"release_notes/#[v0.4.1](https://github.com/odow/SDDP.jl/releases/tag/v0.4.1)-(August-23,-2021)","page":"Release notes","title":"v0.4.1 (August 23, 2021)","text":"","category":"section"},{"location":"release_notes/#Other-28","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Minor changes to our implementation of LagrangianDuality (#454) (#455)","category":"page"},{"location":"release_notes/#[v0.4.0](https://github.com/odow/SDDP.jl/releases/tag/v0.4.0)-(August-17,-2021)","page":"Release notes","title":"v0.4.0 (August 17, 2021)","text":"","category":"section"},{"location":"release_notes/#Breaking","page":"Release notes","title":"Breaking","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"A large refactoring for how we handle stochastic integer programs. This added support for things like SDDP.ContinuousConicDuality and SDDP.LagrangianDuality. It was breaking because we removed the integrality_handler argument to PolicyGraph. 
(#449) (#453)","category":"page"},{"location":"release_notes/#Other-29","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#447) (#448) (#450)","category":"page"},{"location":"release_notes/#[v0.3.17](https://github.com/odow/SDDP.jl/releases/tag/v0.3.17)-(July-6,-2021)","page":"Release notes","title":"v0.3.17 (July 6, 2021)","text":"","category":"section"},{"location":"release_notes/#Added-16","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added SDDP.PSRSamplingScheme (#426)","category":"page"},{"location":"release_notes/#Other-30","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Display more model attributes (#438)\nDocumentation improvements (#433) (#437) (#439)","category":"page"},{"location":"release_notes/#[v0.3.16](https://github.com/odow/SDDP.jl/releases/tag/v0.3.16)-(June-17,-2021)","page":"Release notes","title":"v0.3.16 (June 17, 2021)","text":"","category":"section"},{"location":"release_notes/#Added-17","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added SDDP.RiskAdjustedForwardPass (#413)\nAllow SDDP.Historical to sample sequentially (#420)","category":"page"},{"location":"release_notes/#Other-31","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Update risk measure docstrings (#418)","category":"page"},{"location":"release_notes/#[v0.3.15](https://github.com/odow/SDDP.jl/releases/tag/v0.3.15)-(June-1,-2021)","page":"Release notes","title":"v0.3.15 (June 1, 2021)","text":"","category":"section"},{"location":"release_notes/#Added-18","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added SDDP.StoppingChain","category":"page"},{"location":"release_notes/#Fixed-18","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed scoping bug in SDDP.@stageobjective (#407)\nFixed a bug when the initial point is infeasible (#411)\nSet subproblems to silent by default (#409)","category":"page"},{"location":"release_notes/#Other-32","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Add JuliaFormatter (#412)\nDocumentation improvements (#406) (#408)","category":"page"},{"location":"release_notes/#[v0.3.14](https://github.com/odow/SDDP.jl/releases/tag/v0.3.14)-(March-30,-2021)","page":"Release notes","title":"v0.3.14 (March 30, 2021)","text":"","category":"section"},{"location":"release_notes/#Fixed-19","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed O(N^2) behavior in get_same_children (#393)","category":"page"},{"location":"release_notes/#[v0.3.13](https://github.com/odow/SDDP.jl/releases/tag/v0.3.13)-(March-27,-2021)","page":"Release notes","title":"v0.3.13 (March 27, 
2021)","text":"","category":"section"},{"location":"release_notes/#Fixed-20","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed bug in print.jl\nFixed compat of Reexport (#388)","category":"page"},{"location":"release_notes/#[v0.3.12](https://github.com/odow/SDDP.jl/releases/tag/v0.3.12)-(March-22,-2021)","page":"Release notes","title":"v0.3.12 (March 22, 2021)","text":"","category":"section"},{"location":"release_notes/#Added-19","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added problem statistics to header (#385) (#386)","category":"page"},{"location":"release_notes/#Fixed-21","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed subtypes in visualization (#384)","category":"page"},{"location":"release_notes/#[v0.3.11](https://github.com/odow/SDDP.jl/releases/tag/v0.3.11)-(March-22,-2021)","page":"Release notes","title":"v0.3.11 (March 22, 2021)","text":"","category":"section"},{"location":"release_notes/#Fixed-22","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed constructor in direct mode (#383)","category":"page"},{"location":"release_notes/#Other-33","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fix documentation (#379)","category":"page"},{"location":"release_notes/#[v0.3.10](https://github.com/odow/SDDP.jl/releases/tag/v0.3.10)-(February-23,-2021)","page":"Release notes","title":"v0.3.10 (February 23, 2021)","text":"","category":"section"},{"location":"release_notes/#Fixed-23","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed seriescolor in publication plot (#376)","category":"page"},{"location":"release_notes/#[v0.3.9](https://github.com/odow/SDDP.jl/releases/tag/v0.3.9)-(February-20,-2021)","page":"Release notes","title":"v0.3.9 (February 20, 2021)","text":"","category":"section"},{"location":"release_notes/#Added-20","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Add option to simulate with different incoming state (#372)\nAdded warning for cuts with high dynamic range (#373)","category":"page"},{"location":"release_notes/#Fixed-24","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed seriesalpha in publication plot (#375)","category":"page"},{"location":"release_notes/#[v0.3.8](https://github.com/odow/SDDP.jl/releases/tag/v0.3.8)-(January-19,-2021)","page":"Release notes","title":"v0.3.8 (January 19, 2021)","text":"","category":"section"},{"location":"release_notes/#Other-34","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#367) (#369) (#370)","category":"page"},{"location":"release_notes/#[v0.3.7](https://github.com/odow/SDDP.jl/releases/tag/v0.3.7)-(January-8,-2021)","page":"Release 
notes","title":"v0.3.7 (January 8, 2021)","text":"","category":"section"},{"location":"release_notes/#Other-35","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#362) (#363) (#365) (#366)\nBump copyright (#364)","category":"page"},{"location":"release_notes/#[v0.3.6](https://github.com/odow/SDDP.jl/releases/tag/v0.3.6)-(December-17,-2020)","page":"Release notes","title":"v0.3.6 (December 17, 2020)","text":"","category":"section"},{"location":"release_notes/#Other-36","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fix typos (#358)\nCollapse navigation bar in docs (#359)\nUpdate TagBot.yml (#361)","category":"page"},{"location":"release_notes/#[v0.3.5](https://github.com/odow/SDDP.jl/releases/tag/v0.3.5)-(November-18,-2020)","page":"Release notes","title":"v0.3.5 (November 18, 2020)","text":"","category":"section"},{"location":"release_notes/#Other-37","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Update citations (#348)\nSwitch to GitHub actions (#355)","category":"page"},{"location":"release_notes/#[v0.3.4](https://github.com/odow/SDDP.jl/releases/tag/v0.3.4)-(August-25,-2020)","page":"Release notes","title":"v0.3.4 (August 25, 2020)","text":"","category":"section"},{"location":"release_notes/#Added-21","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added non-uniform distributionally robust risk measure (#328)\nAdded numerical recovery functions (#330)\nAdded experimental StochOptFormat (#332) (#336) (#337) (#341) (#343) (#344)\nAdded entropic risk measure (#347)","category":"page"},{"location":"release_notes/#Other-38","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#327) (#333) (#339) (#340)","category":"page"},{"location":"release_notes/#[v0.3.3](https://github.com/odow/SDDP.jl/releases/tag/v0.3.3)-(June-19,-2020)","page":"Release notes","title":"v0.3.3 (June 19, 2020)","text":"","category":"section"},{"location":"release_notes/#Added-22","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added asynchronous support for price and belief states (#325)\nAdded ForwardPass plug-in system (#320)","category":"page"},{"location":"release_notes/#Fixed-25","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fix check for probabilities in Markovian graph (#322)","category":"page"},{"location":"release_notes/#[v0.3.2](https://github.com/odow/SDDP.jl/releases/tag/v0.3.2)-(April-6,-2020)","page":"Release notes","title":"v0.3.2 (April 6, 2020)","text":"","category":"section"},{"location":"release_notes/#Added-23","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added log_frequency argument to SDDP.train (#307)","category":"page"},{"location":"release_notes/#Other-39","page":"Release 
notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Improve error message in deterministic equivalent (#312)\nUpdate to RecipesBase 1.0 (#313)","category":"page"},{"location":"release_notes/#[v0.3.1](https://github.com/odow/SDDP.jl/releases/tag/v0.3.1)-(February-26,-2020)","page":"Release notes","title":"v0.3.1 (February 26, 2020)","text":"","category":"section"},{"location":"release_notes/#Fixed-26","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed filename in integrality_handlers.jl (#304)","category":"page"},{"location":"release_notes/#[v0.3.0](https://github.com/odow/SDDP.jl/releases/tag/v0.3.0)-(February-20,-2020)","page":"Release notes","title":"v0.3.0 (February 20, 2020)","text":"","category":"section"},{"location":"release_notes/#Breaking-2","page":"Release notes","title":"Breaking","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Breaking changes to update to JuMP v0.21 (#300).","category":"page"},{"location":"release_notes/#[v0.2.4](https://github.com/odow/SDDP.jl/releases/tag/v0.2.4)-(February-7,-2020)","page":"Release notes","title":"v0.2.4 (February 7, 2020)","text":"","category":"section"},{"location":"release_notes/#Added-24","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added a counter for the number of total subproblem solves (#301)","category":"page"},{"location":"release_notes/#Other-40","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Update formatter (#298)\nAdded tests (#299)","category":"page"},{"location":"release_notes/#[v0.2.3](https://github.com/odow/SDDP.jl/releases/tag/v0.2.3)-(January-24,-2020)","page":"Release notes","title":"v0.2.3 (January 24, 2020)","text":"","category":"section"},{"location":"release_notes/#Added-25","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added support for convex risk measures (#294)","category":"page"},{"location":"release_notes/#Fixed-27","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed bug when subproblem is infeasible (#296)\nFixed bug in deterministic equivalent (#297)","category":"page"},{"location":"release_notes/#Other-41","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added example from IJOC paper (#293)","category":"page"},{"location":"release_notes/#[v0.2.2](https://github.com/odow/SDDP.jl/releases/tag/v0.2.2)-(January-10,-2020)","page":"Release notes","title":"v0.2.2 (January 10, 2020)","text":"","category":"section"},{"location":"release_notes/#Fixed-28","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed flakey time limit in tests (#291)","category":"page"},{"location":"release_notes/#Other-42","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release 
notes","title":"Release notes","text":"Removed MathOptFormat.jl (#289)\nUpdate copyright (#290)","category":"page"},{"location":"release_notes/#[v0.2.1](https://github.com/odow/SDDP.jl/releases/tag/v0.2.1)-(December-19,-2019)","page":"Release notes","title":"v0.2.1 (December 19, 2019)","text":"","category":"section"},{"location":"release_notes/#Added-26","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added support for approximating a Markov lattice (#282) (#285)\nAdd tools for visualizing the value function (#272) (#286)\nWrite .mof.json files on error (#284)","category":"page"},{"location":"release_notes/#Other-43","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Improve documentation (#281) (#283)\nUpdate tests for Julia 1.3 (#287)","category":"page"},{"location":"release_notes/#[v0.2.0](https://github.com/odow/SDDP.jl/releases/tag/v0.2.0)-(December-16,-2019)","page":"Release notes","title":"v0.2.0 (December 16, 2019)","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"This version added the asynchronous parallel implementation with a few minor breaking changes in how we iterated internally. It didn't break basic user-facing models, only implementations that implemented some of the extension features. It probably could have been a v1.1 release.","category":"page"},{"location":"release_notes/#Added-27","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added asynchronous parallel implementation (#277)\nAdded roll-out algorithm for cyclic graphs (#279)","category":"page"},{"location":"release_notes/#Other-44","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Improved error messages in PolicyGraph (#271)\nAdded JuliaFormatter (#273) (#276)\nFixed compat bounds (#274) (#278)\nAdded documentation for simulating non-standard graphs (#280)","category":"page"},{"location":"release_notes/#[v0.1.0](https://github.com/odow/SDDP.jl/releases/tag/v0.1.0)-(October-17,-2019)","page":"Release notes","title":"v0.1.0 (October 17, 2019)","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"A complete rewrite of SDDP.jl based on the policy graph framework. This was essentially a new package. It has minimal code in common with the previous implementation.","category":"page"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Development started on September 28, 2018 in Kokako.jl, and the code was merged into SDDP.jl on March 14, 2019.","category":"page"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"The pull request SDDP.jl#180 lists the 29 issues that the rewrite closed.","category":"page"},{"location":"release_notes/#[v0.0.1](https://github.com/odow/SDDP.jl/releases/tag/v0.0.1)-(April-18,-2018)","page":"Release notes","title":"v0.0.1 (April 18, 2018)","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Initial release. Development had been underway since January 22, 2016 in the StochDualDynamicProgram.jl repository. 
The last development commit there was April 5, 2017. Work then continued in this repository for a year before the first tagged release.","category":"page"},{"location":"examples/asset_management_stagewise/","page":"Asset management with modifications","title":"Asset management with modifications","text":"EditURL = \"asset_management_stagewise.jl\"","category":"page"},{"location":"examples/asset_management_stagewise/#Asset-management-with-modifications","page":"Asset management with modifications","title":"Asset management with modifications","text":"","category":"section"},{"location":"examples/asset_management_stagewise/","page":"Asset management with modifications","title":"Asset management with modifications","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/asset_management_stagewise/","page":"Asset management with modifications","title":"Asset management with modifications","text":"A modified version of the Asset Management Problem Taken from the book J.R. Birge, F. Louveaux, Introduction to Stochastic Programming, Springer Series in Operations Research and Financial Engineering, Springer New York, New York, NY, 2011","category":"page"},{"location":"examples/asset_management_stagewise/","page":"Asset management with modifications","title":"Asset management with modifications","text":"using SDDP, HiGHS, Test\n\nfunction asset_management_stagewise(; cut_type)\n w_s = [1.25, 1.06]\n w_b = [1.14, 1.12]\n Phi = [-1, 5]\n Psi = [0.02, 0.0]\n\n model = SDDP.MarkovianPolicyGraph(;\n sense = :Max,\n transition_matrices = Array{Float64,2}[\n [1.0]',\n [0.5 0.5],\n [0.5 0.5; 0.5 0.5],\n [0.5 0.5; 0.5 0.5],\n ],\n upper_bound = 1000.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, node\n t, i = node\n @variable(subproblem, xs >= 0, SDDP.State, initial_value = 0)\n @variable(subproblem, xb >= 0, SDDP.State, initial_value = 0)\n if t == 1\n @constraint(subproblem, xs.out + xb.out == 55 + xs.in + xb.in)\n @stageobjective(subproblem, 0)\n elseif t == 2 || t == 3\n @variable(subproblem, phi)\n @constraint(\n subproblem,\n w_s[i] * xs.in + w_b[i] * xb.in + phi == xs.out + xb.out\n )\n SDDP.parameterize(subproblem, [1, 2], [0.6, 0.4]) do ω\n JuMP.fix(phi, Phi[ω])\n @stageobjective(subproblem, Psi[ω] * xs.out)\n end\n else\n @variable(subproblem, u >= 0)\n @variable(subproblem, v >= 0)\n @constraint(\n subproblem,\n w_s[i] * xs.in + w_b[i] * xb.in + u - v == 80,\n )\n @stageobjective(subproblem, -4u + v)\n end\n end\n SDDP.train(\n model;\n cut_type = cut_type,\n log_frequency = 10,\n risk_measure = (node) -> begin\n if node[1] != 3\n SDDP.Expectation()\n else\n SDDP.EAVaR(; lambda = 0.5, beta = 0.5)\n end\n end,\n )\n @test SDDP.calculate_bound(model) ≈ 1.278 atol = 1e-3\n return\nend\n\nasset_management_stagewise(; cut_type = SDDP.SINGLE_CUT)\n\nasset_management_stagewise(; cut_type = SDDP.MULTI_CUT)","category":"page"},{"location":"guides/choose_a_stopping_rule/#Choose-a-stopping-rule","page":"Choose a stopping rule","title":"Choose a stopping rule","text":"","category":"section"},{"location":"guides/choose_a_stopping_rule/","page":"Choose a stopping rule","title":"Choose a stopping rule","text":"The theory of SDDP tells us that the algorithm converges to an optimal policy almost surely in a finite number of iterations. In practice, this number is very large. 
Therefore, we need some way of pre-emptively terminating SDDP when the solution is “good enough.” We call heuristics for pre-emptively terminating SDDP stopping rules.","category":"page"},{"location":"guides/choose_a_stopping_rule/#Basic-limits","page":"Choose a stopping rule","title":"Basic limits","text":"","category":"section"},{"location":"guides/choose_a_stopping_rule/","page":"Choose a stopping rule","title":"Choose a stopping rule","text":"The training of an SDDP policy can be terminated after a fixed number of iterations using the iteration_limit keyword.","category":"page"},{"location":"guides/choose_a_stopping_rule/","page":"Choose a stopping rule","title":"Choose a stopping rule","text":"SDDP.train(model; iteration_limit = 10)","category":"page"},{"location":"guides/choose_a_stopping_rule/","page":"Choose a stopping rule","title":"Choose a stopping rule","text":"The training of an SDDP policy can be terminated after a fixed number of seconds using the time_limit keyword.","category":"page"},{"location":"guides/choose_a_stopping_rule/","page":"Choose a stopping rule","title":"Choose a stopping rule","text":"SDDP.train(model; time_limit = 2.0)","category":"page"},{"location":"guides/choose_a_stopping_rule/#Stopping-rules","page":"Choose a stopping rule","title":"Stopping rules","text":"","category":"section"},{"location":"guides/choose_a_stopping_rule/","page":"Choose a stopping rule","title":"Choose a stopping rule","text":"In addition to the limits provided as keyword arguments, a variety of other stopping rules are available. These can be passed to SDDP.train as a vector to the stopping_rules keyword. Training stops if any of the rules becomes active. To stop when all of the rules become active, use SDDP.StoppingChain. For example:","category":"page"},{"location":"guides/choose_a_stopping_rule/","page":"Choose a stopping rule","title":"Choose a stopping rule","text":"# Terminate if BoundStalling becomes true\nSDDP.train(\n model;\n stopping_rules = [SDDP.BoundStalling(10, 1e-4)],\n)\n\n# Terminate if BoundStalling OR TimeLimit becomes true\nSDDP.train(\n model; \n stopping_rules = [SDDP.BoundStalling(10, 1e-4), SDDP.TimeLimit(100.0)],\n)\n\n# Terminate if BoundStalling AND TimeLimit becomes true\nSDDP.train(\n model; \n stopping_rules = [\n SDDP.StoppingChain(SDDP.BoundStalling(10, 1e-4), SDDP.TimeLimit(100.0)),\n ],\n)","category":"page"},{"location":"guides/choose_a_stopping_rule/","page":"Choose a stopping rule","title":"Choose a stopping rule","text":"See Stopping rules for a list of stopping rules supported by SDDP.jl.","category":"page"},{"location":"examples/belief/","page":"Partially observable inventory management","title":"Partially observable inventory management","text":"EditURL = \"belief.jl\"","category":"page"},{"location":"examples/belief/#Partially-observable-inventory-management","page":"Partially observable inventory management","title":"Partially observable inventory management","text":"","category":"section"},{"location":"examples/belief/","page":"Partially observable inventory management","title":"Partially observable inventory management","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/belief/","page":"Partially observable inventory management","title":"Partially observable inventory management","text":"using SDDP, HiGHS, Random, Statistics, Test\n\nfunction inventory_management_problem()\n demand_values = [1.0, 2.0]\n demand_prob = Dict(:Ah => [0.2, 0.8], :Bh => [0.8, 0.2])\n graph = SDDP.Graph(\n :root_node,\n [:Ad, :Ah, :Bd, :Bh],\n [\n (:root_node => :Ad, 0.5),\n (:root_node => :Bd, 0.5),\n (:Ad => :Ah, 1.0),\n (:Ah => :Ad, 0.8),\n (:Ah => :Bd, 0.1),\n (:Bd => :Bh, 1.0),\n (:Bh => :Bd, 0.8),\n (:Bh => :Ad, 0.1),\n ],\n )\n SDDP.add_ambiguity_set(graph, [:Ad, :Bd], 1e2)\n SDDP.add_ambiguity_set(graph, [:Ah, :Bh], 1e2)\n\n model = SDDP.PolicyGraph(\n graph;\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, node\n @variables(\n subproblem,\n begin\n 0 <= inventory <= 2, (SDDP.State, initial_value = 0.0)\n buy >= 0\n demand\n end\n )\n @constraint(subproblem, demand == inventory.in - inventory.out + buy)\n if node == :Ad || node == :Bd || node == :D\n JuMP.fix(demand, 0)\n @stageobjective(subproblem, buy)\n else\n SDDP.parameterize(subproblem, demand_values, demand_prob[node]) do ω\n return JuMP.fix(demand, ω)\n end\n @stageobjective(subproblem, 2 * buy + inventory.out)\n end\n end\n # Train the policy.\n Random.seed!(123)\n SDDP.train(\n model;\n iteration_limit = 100,\n cut_type = SDDP.SINGLE_CUT,\n log_frequency = 10,\n parallel_scheme = SDDP.Serial(),\n )\n results = SDDP.simulate(model, 500; parallel_scheme = SDDP.Serial())\n objectives =\n [sum(s[:stage_objective] for s in simulation) for simulation in results]\n sample_mean = round(Statistics.mean(objectives); digits = 2)\n sample_ci = round(1.96 * Statistics.std(objectives) / sqrt(500); digits = 2)\n @test SDDP.calculate_bound(model) ≈ sample_mean atol = sample_ci\n return\nend\n\ninventory_management_problem()","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"EditURL = \"decision_hazard.jl\"","category":"page"},{"location":"tutorial/decision_hazard/#Here-and-now-and-hazard-decision","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"","category":"section"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"SDDP.jl assumes that the agent gets to make a decision after observing the realization of the random variable. This is called a wait-and-see or hazard-decision model. In contrast, you might want your agent to make decisions before observing the random variable. This is called a here-and-now or decision-hazard model.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"info: Info\nThe terms decision-hazard and hazard-decision come from the French hasard, meaning chance. It could also have been translated as uncertainty-decision and decision-uncertainty, but the community seems to have settled on the transliteration hazard instead. 
We like the hazard-decision and decision-hazard terms because they clearly communicate the order of the decision and the uncertainty.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"The purpose of this tutorial is to demonstrate how to model here-and-now decisions in SDDP.jl.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"This tutorial uses the following packages:","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"using SDDP\nimport HiGHS","category":"page"},{"location":"tutorial/decision_hazard/#Hazard-decision-formulation","page":"Here-and-now and hazard-decision","title":"Hazard-decision formulation","text":"","category":"section"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"As an example, we're going to build a standard hydro-thermal scheduling model, with a single hydro-reservoir and a single thermal generation plant. In each of the four stages, we need to choose some mix of u_thermal and u_hydro to meet a demand of 9 units, where unmet demand is penalized at a rate of $500/unit.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"hazard_decision = SDDP.LinearPolicyGraph(;\n stages = 4,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, node\n @variables(sp, begin\n 0 <= x_storage <= 8, (SDDP.State, initial_value = 6)\n u_thermal >= 0\n u_hydro >= 0\n u_unmet_demand >= 0\n end)\n @constraint(sp, u_thermal + u_hydro == 9 - u_unmet_demand)\n @constraint(sp, c_balance, x_storage.out == x_storage.in - u_hydro + 0)\n SDDP.parameterize(sp, [2, 3]) do ω_inflow\n return set_normalized_rhs(c_balance, ω_inflow)\n end\n @stageobjective(sp, 500 * u_unmet_demand + 20 * u_thermal)\nend","category":"page"},{"location":"tutorial/decision_hazard/#Decision-hazard-formulation","page":"Here-and-now and hazard-decision","title":"Decision-hazard formulation","text":"","category":"section"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"In the wait-and-see formulation, we get to decide the generation variables after observing the realization of ω_inflow. However, a common modeling situation is that we need to decide the level of thermal generation u_thermal before observing the inflow.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"SDDP.jl can model here-and-now decisions with a modeling trick: a wait-and-see decision in stage t-1 is equivalent to a here-and-now decision in stage t.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"In other words, we need to convert the u_thermal decision from a control variable that is decided in stage t, to a state variable that is decided in stage t-1. 
Here's our new model, with the three lines that have changed:","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"decision_hazard = SDDP.LinearPolicyGraph(;\n stages = 4,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, node\n @variables(sp, begin\n 0 <= x_storage <= 8, (SDDP.State, initial_value = 6)\n u_thermal >= 0, (SDDP.State, initial_value = 0) # <-- changed\n u_hydro >= 0\n u_unmet_demand >= 0\n end)\n @constraint(sp, u_thermal.in + u_hydro == 9 - u_unmet_demand) # <-- changed\n @constraint(sp, c_balance, x_storage.out == x_storage.in - u_hydro + 0)\n SDDP.parameterize(sp, [2, 3]) do ω\n return set_normalized_rhs(c_balance, ω)\n end\n @stageobjective(sp, 500 * u_unmet_demand + 20 * u_thermal.in) # <-- changed\nend","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"Can you understand the reformulation? In each stage, we now use the value of u_thermal.in instead of u_thermal, and the value of the outgoing u_thermal.out is the here-and-now decision for stage t+1.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"(If you can spot a \"mistake\" with this model, don't worry, we'll fix it below. Presenting it like this simplifies the exposition.)","category":"page"},{"location":"tutorial/decision_hazard/#Comparison","page":"Here-and-now and hazard-decision","title":"Comparison","text":"","category":"section"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"Let's compare the cost of operating the two models:","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"function train_and_compute_cost(model)\n SDDP.train(model; print_level = 0)\n return println(\"Cost = \\$\", SDDP.calculate_bound(model))\nend\n\ntrain_and_compute_cost(hazard_decision)","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"train_and_compute_cost(decision_hazard)","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"This suggests that choosing the thermal generation before observing the inflow adds a cost of $250. But does this make sense?","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"If we look carefully at our decision_hazard model, the incoming value of u_thermal.in in the first stage is fixed to the initial_value of 0. 
Therefore, we must always meet the full demand with u_hydro, which we cannot do without incurring unmet demand.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"To allow the model to choose an optimal level of u_thermal in the first-stage, we need to add an extra stage that is deterministic with no stage objective.","category":"page"},{"location":"tutorial/decision_hazard/#Fixing-the-decision-hazard","page":"Here-and-now and hazard-decision","title":"Fixing the decision-hazard","text":"","category":"section"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"In the following model, we now have five stages, so that stage t+1 in decision_hazard_2 corresponds to stage t in decision_hazard. We've also added an if-statement, which adds different constraints depending on the node. Note that we need to add an x_storage.out == x_storage.in constraint because the storage can't change in this new first-stage.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"decision_hazard_2 = SDDP.LinearPolicyGraph(;\n stages = 5, # <-- changed\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, node\n @variables(sp, begin\n 0 <= x_storage <= 8, (SDDP.State, initial_value = 6)\n u_thermal >= 0, (SDDP.State, initial_value = 0)\n u_hydro >= 0\n u_unmet_demand >= 0\n end)\n if node == 1 # <-- new\n @constraint(sp, x_storage.out == x_storage.in) # <-- new\n @stageobjective(sp, 0) # <-- new\n else\n @constraint(sp, u_thermal.in + u_hydro == 9 - u_unmet_demand)\n @constraint(sp, c_balance, x_storage.out == x_storage.in - u_hydro + 0)\n SDDP.parameterize(sp, [2, 3]) do ω\n return set_normalized_rhs(c_balance, ω)\n end\n @stageobjective(sp, 500 * u_unmet_demand + 20 * u_thermal.in)\n end\nend\n\ntrain_and_compute_cost(decision_hazard_2)","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"Now we find that the cost of choosing the thermal generation before observing the inflow adds a much more reasonable cost of $10.","category":"page"},{"location":"tutorial/decision_hazard/#Summary","page":"Here-and-now and hazard-decision","title":"Summary","text":"","category":"section"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"To summarize, the difference between here-and-now and wait-and-see variables is a modeling choice.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"To create a here-and-now decision, add it as a state variable to the previous stage","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"In some cases, you'll need to add an additional \"first-stage\" problem to enable the model to choose an optimal value for the here-and-now decision variable. You do not need to do this if the first stage is deterministic. 
You must make sure that the subproblem is feasible for all possible incoming values of the here-and-now decision variable.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"EditURL = \"pglib_opf.jl\"","category":"page"},{"location":"tutorial/pglib_opf/#Alternative-forward-models","page":"Alternative forward models","title":"Alternative forward models","text":"","category":"section"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"This example demonstrates how to train convex and non-convex models.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"This example uses the following packages:","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"using SDDP\nimport Ipopt\nimport PowerModels\nimport Test","category":"page"},{"location":"tutorial/pglib_opf/#Formulation","page":"Alternative forward models","title":"Formulation","text":"","category":"section"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"For our model, we build a simple optimal power flow model with a single hydro-electric generator.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"The formulation of our optimal power flow problem depends on model_type, which must be one of the PowerModels formulations.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"(To run locally, download pglib_opf_case5_pjm.m and update filename appropriately.)","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"function build_model(model_type)\n filename = joinpath(@__DIR__, \"pglib_opf_case5_pjm.m\")\n data = PowerModels.parse_file(filename)\n return SDDP.PolicyGraph(\n SDDP.UnicyclicGraph(0.95);\n sense = :Min,\n lower_bound = 0.0,\n optimizer = Ipopt.Optimizer,\n ) do sp, t\n power_model = PowerModels.instantiate_model(\n data,\n model_type,\n PowerModels.build_opf;\n jump_model = sp,\n )\n # Now add hydro power models. 
Assume that generator 5 is hydro, and the\n # rest are thermal.\n pg = power_model.var[:it][:pm][:nw][0][:pg][5]\n sp[:pg] = pg\n @variable(sp, x >= 0, SDDP.State, initial_value = 10.0)\n @variable(sp, deficit >= 0)\n @constraint(sp, balance, x.out == x.in - pg + deficit)\n @stageobjective(sp, objective_function(sp) + 1e6 * deficit)\n SDDP.parameterize(sp, [0, 2, 5]) do ω\n return SDDP.set_normalized_rhs(balance, ω)\n end\n return\n end\nend","category":"page"},{"location":"tutorial/pglib_opf/#Training-a-convex-model","page":"Alternative forward models","title":"Training a convex model","text":"","category":"section"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"We can build and train a convex approximation of the optimal power flow problem.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"The problem with the convex model is that it does not accurately simulate the true dynamics of the problem. Therefore, it under-estimates the true cost of operation.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"convex = build_model(PowerModels.DCPPowerModel)\nSDDP.train(convex; iteration_limit = 10)","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"To more accurately simulate the dynamics of the problem, a common approach is to write the cuts representing the policy to a file, and then read them into a non-convex model:","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"SDDP.write_cuts_to_file(convex, \"convex.cuts.json\")\nnon_convex = build_model(PowerModels.ACPPowerModel)\nSDDP.read_cuts_from_file(non_convex, \"convex.cuts.json\")","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"Now we can simulate non_convex to evaluate the policy.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"result = SDDP.simulate(non_convex, 1)","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"A problem with reading and writing the cuts to file is that the cuts have been generated from trial points of the convex model. Therefore, the policy may be arbitrarily bad at points visited by the non-convex model.","category":"page"},{"location":"tutorial/pglib_opf/#Training-a-non-convex-model","page":"Alternative forward models","title":"Training a non-convex model","text":"","category":"section"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"We can also build and train a non-convex formulation of the optimal power flow problem.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"The problem with the non-convex model is that because it is non-convex, SDDP.jl may find a sub-optimal policy. 
Therefore, it may over-estimate the true cost of operation.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"non_convex = build_model(PowerModels.ACPPowerModel)\nSDDP.train(non_convex; iteration_limit = 10)\nresult = SDDP.simulate(non_convex, 1)","category":"page"},{"location":"tutorial/pglib_opf/#Combining-convex-and-non-convex-models","page":"Alternative forward models","title":"Combining convex and non-convex models","text":"","category":"section"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"To summarize, training with the convex model constructs cuts at points that may never be visited by the non-convex model, and training with the non-convex model may construct arbitrarily poor cuts because a key assumption of SDDP is convexity.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"As a compromise, we can train a policy using a combination of the convex and non-convex models; we'll use the non-convex model to generate trial points on the forward pass, and we'll use the convex model to build cuts on the backward pass.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"convex = build_model(PowerModels.DCPPowerModel)","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"non_convex = build_model(PowerModels.ACPPowerModel)","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"To do so, we train convex using the SDDP.AlternativeForwardPass forward pass, which simulates the model using non_convex, and we use SDDP.AlternativePostIterationCallback as a post-iteration callback, which copies cuts from the convex model back into the non_convex model.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"SDDP.train(\n convex;\n forward_pass = SDDP.AlternativeForwardPass(non_convex),\n post_iteration_callback = SDDP.AlternativePostIterationCallback(non_convex),\n iteration_limit = 10,\n)","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"In practice, if we were to simulate non_convex now, we should obtain a better policy than either of the two previous approaches.","category":"page"},{"location":"","page":"Home","title":"Home","text":"CurrentModule = SDDP","category":"page"},{"location":"","page":"Home","title":"Home","text":"\"logo\"","category":"page"},{"location":"#Introduction","page":"Home","title":"Introduction","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"(Image: Build Status) (Image: code coverage)","category":"page"},{"location":"","page":"Home","title":"Home","text":"Welcome to SDDP.jl, a package for solving large convex multistage stochastic programming problems using stochastic dual dynamic programming.","category":"page"},{"location":"","page":"Home","title":"Home","text":"SDDP.jl is built on JuMP, so it supports a number of open-source and commercial solvers, making it a powerful and flexible tool for stochastic optimization.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The 
implementation of the stochastic dual dynamic programming algorithm in SDDP.jl is state of the art, and it includes support for a number of advanced features not commonly found in other implementations. This includes support for:","category":"page"},{"location":"","page":"Home","title":"Home","text":"infinite horizon problems\nconvex risk measures\nmixed-integer state and control variables\npartially observable stochastic processes.","category":"page"},{"location":"#Installation","page":"Home","title":"Installation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Install SDDP.jl as follows:","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia> import Pkg\n\njulia> Pkg.add(\"SDDP\")","category":"page"},{"location":"#License","page":"Home","title":"License","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"SDDP.jl is licensed under the MPL 2.0 license.","category":"page"},{"location":"#Resources-for-getting-started","page":"Home","title":"Resources for getting started","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"There are a few ways to get started with SDDP.jl:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Become familiar with JuMP by reading the JuMP documentation\nRead the introductory tutorial An introduction to SDDP.jl\nBrowse some of the examples, such as Example: deterministic to stochastic","category":"page"},{"location":"#Getting-help","page":"Home","title":"Getting help","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"If you need help, please open a GitHub issue.","category":"page"},{"location":"#How-the-documentation-is-structured","page":"Home","title":"How the documentation is structured","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Having a high-level overview of how this documentation is structured will help you know where to look for certain things.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Tutorials contains step-by-step explanations of how to use SDDP.jl. Once you've got SDDP.jl installed, start by reading An introduction to SDDP.jl.\nGuides contains \"how-to\" snippets that demonstrate specific topics within SDDP.jl. A good one to get started on is Debug a model.\nExplanation contains step-by-step explanations of the theory and algorithms that underpin SDDP.jl. If you want a basic understanding of the algorithm behind SDDP.jl, start with Introductory theory.\nExamples contain worked examples of various problems solved using SDDP.jl. A good one to get started on is the Hydro-thermal scheduling problem. In particular, it shows how to solve an infinite horizon problem.\nThe API Reference contains a complete list of the functions you can use in SDDP.jl. Look here if you want to know how to use a particular function.","category":"page"},{"location":"#Citing-SDDP.jl","page":"Home","title":"Citing SDDP.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"If you use SDDP.jl, we ask that you please cite the following:","category":"page"},{"location":"","page":"Home","title":"Home","text":"@article{dowson_sddp.jl,\n\ttitle = {{SDDP}.jl: a {Julia} package for stochastic dual dynamic programming},\n\tjournal = {INFORMS Journal on Computing},\n\tauthor = {Dowson, O. 
and Kapelevich, L.},\n\tdoi = {https://doi.org/10.1287/ijoc.2020.0987},\n\tyear = {2021},\n\tvolume = {33},\n\tissue = {1},\n\tpages = {27-33},\n}","category":"page"},{"location":"","page":"Home","title":"Home","text":"Here is an earlier preprint.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you use the infinite horizon functionality, we ask that you please cite the following:","category":"page"},{"location":"","page":"Home","title":"Home","text":"@article{dowson_policy_graph,\n\ttitle = {The policy graph decomposition of multistage stochastic optimization problems},\n\tdoi = {https://doi.org/10.1002/net.21932},\n\tjournal = {Networks},\n\tauthor = {Dowson, O.},\n\tvolume = {76},\n\tissue = {1},\n\tpages = {3-23},\n\tyear = {2020}\n}","category":"page"},{"location":"","page":"Home","title":"Home","text":"Here is an earlier preprint.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you use the partially observable functionality, we ask that you please cite the following:","category":"page"},{"location":"","page":"Home","title":"Home","text":"@article{dowson_pomsp,\n\ttitle = {Partially observable multistage stochastic programming},\n\tdoi = {https://doi.org/10.1016/j.orl.2020.06.005},\n\tjournal = {Operations Research Letters},\n\tauthor = {Dowson, O. and Morton, D.P. and Pagnoncelli, B.K.},\n\tvolume = {48},\n\tissue = {4},\n\tpages = {505-512},\n\tyear = {2020}\n}","category":"page"},{"location":"","page":"Home","title":"Home","text":"Here is an earlier preprint.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you use the objective state functionality, we ask that you please cite the following:","category":"page"},{"location":"","page":"Home","title":"Home","text":"@article{downward_objective,\n\ttitle = {Stochastic dual dynamic programming with stagewise-dependent objective uncertainty},\n\tdoi = {https://doi.org/10.1016/j.orl.2019.11.002},\n\tjournal = {Operations Research Letters},\n\tauthor = {Downward, A. and Dowson, O. and Baucke, R.},\n\tvolume = {48},\n\tissue = {1},\n\tpages = {33-39},\n\tyear = {2020}\n}","category":"page"},{"location":"","page":"Home","title":"Home","text":"Here is an earlier preprint.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you use the entropic risk measure, we ask that you please cite the following:","category":"page"},{"location":"","page":"Home","title":"Home","text":"@article{dowson_entropic,\n\ttitle = {Incorporating convex risk measures into multistage stochastic programming algorithms},\n\tdoi = {https://doi.org/10.1007/s10479-022-04977-w},\n\tjournal = {Annals of Operations Research},\n\tauthor = {Dowson, O. and Morton, D.P. and Pagnoncelli, B.K.},\n\tyear = {2022},\n}","category":"page"},{"location":"","page":"Home","title":"Home","text":"Here is an earlier preprint.","category":"page"},{"location":"examples/all_blacks/","page":"Deterministic All Blacks","title":"Deterministic All Blacks","text":"EditURL = \"all_blacks.jl\"","category":"page"},{"location":"examples/all_blacks/#Deterministic-All-Blacks","page":"Deterministic All Blacks","title":"Deterministic All Blacks","text":"","category":"section"},{"location":"examples/all_blacks/","page":"Deterministic All Blacks","title":"Deterministic All Blacks","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/all_blacks/","page":"Deterministic All Blacks","title":"Deterministic All Blacks","text":"using SDDP, HiGHS, Test\n\nfunction all_blacks()\n # Number of time periods, number of seats, R_ij = revenue from selling seat\n # i at time j, offer_ij = whether an offer for seat i will come at time j\n (T, N, R, offer) = (3, 2, [3 3 6; 3 3 6], [1 1 0; 1 0 1])\n model = SDDP.LinearPolicyGraph(;\n stages = T,\n sense = :Max,\n upper_bound = 100.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, stage\n # Seat remaining?\n @variable(sp, 0 <= x[1:N] <= 1, SDDP.State, Bin, initial_value = 1)\n # Action: accept offer, or don't accept offer\n @variable(sp, accept_offer, Bin)\n # Balance on seats\n @constraint(\n sp,\n [i in 1:N],\n x[i].out == x[i].in - offer[i, stage] * accept_offer\n )\n @stageobjective(\n sp,\n sum(R[i, stage] * offer[i, stage] * accept_offer for i in 1:N)\n )\n end\n SDDP.train(model; duality_handler = SDDP.LagrangianDuality())\n @test SDDP.calculate_bound(model) ≈ 9.0\n return\nend\n\nall_blacks()","category":"page"},{"location":"examples/sldp_example_one/","page":"SLDP: example 1","title":"SLDP: example 1","text":"EditURL = \"sldp_example_one.jl\"","category":"page"},{"location":"examples/sldp_example_one/#SLDP:-example-1","page":"SLDP: example 1","title":"SLDP: example 1","text":"","category":"section"},{"location":"examples/sldp_example_one/","page":"SLDP: example 1","title":"SLDP: example 1","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/sldp_example_one/","page":"SLDP: example 1","title":"SLDP: example 1","text":"This example is derived from Section 4.2 of the paper: Ahmed, S., Cabral, F. G., & da Costa, B. F. P. (2019). Stochastic Lipschitz Dynamic Programming. Optimization Online. 
PDF","category":"page"},{"location":"examples/sldp_example_one/","page":"SLDP: example 1","title":"SLDP: example 1","text":"using SDDP, HiGHS, Test\n\nfunction sldp_example_one()\n model = SDDP.LinearPolicyGraph(;\n stages = 8,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, t\n @variable(sp, x, SDDP.State, initial_value = 2.0)\n @variables(sp, begin\n x⁺ >= 0\n x⁻ >= 0\n 0 <= u <= 1, Bin\n ω\n end)\n @stageobjective(sp, 0.9^(t - 1) * (x⁺ + x⁻))\n @constraints(sp, begin\n x.out == x.in + 2 * u - 1 + ω\n x⁺ >= x.out\n x⁻ >= -x.out\n end)\n points = [\n -0.3089653673606697,\n -0.2718277412744214,\n -0.09611178608243474,\n 0.24645863921577763,\n 0.5204224537256875,\n ]\n return SDDP.parameterize(φ -> JuMP.fix(ω, φ), sp, [points; -points])\n end\n SDDP.train(model; log_frequency = 10)\n @test SDDP.calculate_bound(model) <= 1.1675\n return\nend\n\nsldp_example_one()","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/#Simulate-using-a-different-sampling-scheme","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"","category":"section"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"DocTestSetup = quote\n using SDDP, HiGHS\nend","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"By default, SDDP.simulate will simulate the policy using the distributions of noise terms that were defined when the model was created. We call these in-sample simulations. However, in general the in-sample distributions are an approximation of some underlying probability model which we term the true process. 
Therefore, SDDP.jl makes it easy to simulate the policy using different probability distributions.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"To demonstrate the different ways of simulating the policy, we're going to use the model from the tutorial Markovian policy graphs.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"julia> using SDDP, HiGHS\n\njulia> Ω = [\n (inflow = 0.0, fuel_multiplier = 1.5),\n (inflow = 50.0, fuel_multiplier = 1.0),\n (inflow = 100.0, fuel_multiplier = 0.75),\n ]\n3-element Vector{@NamedTuple{inflow::Float64, fuel_multiplier::Float64}}:\n (inflow = 0.0, fuel_multiplier = 1.5)\n (inflow = 50.0, fuel_multiplier = 1.0)\n (inflow = 100.0, fuel_multiplier = 0.75)\n\njulia> model = SDDP.MarkovianPolicyGraph(\n transition_matrices = Array{Float64, 2}[\n [1.0]',\n [0.75 0.25],\n [0.75 0.25; 0.25 0.75],\n ],\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, node\n # Unpack the stage and Markov index.\n t, markov_state = node\n # Define the state variable.\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n # Define the control variables.\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n inflow\n end)\n # Define the constraints\n @constraints(subproblem, begin\n volume.out == volume.in + inflow - hydro_generation - hydro_spill\n thermal_generation + hydro_generation == 150.0\n end)\n # Note how we can use `markov_state` to dispatch an `if` statement.\n probability = if markov_state == 1 # wet climate state\n [1 / 6, 1 / 3, 1 / 2]\n else # dry climate state\n [1 / 2, 1 / 3, 1 / 6]\n end\n fuel_cost = [50.0, 100.0, 150.0]\n SDDP.parameterize(subproblem, Ω, probability) do ω\n JuMP.fix(inflow, ω.inflow)\n @stageobjective(\n subproblem,\n ω.fuel_multiplier * fuel_cost[t] * thermal_generation,\n )\n return\n end\n return\n end\nA policy graph with 5 nodes.\n Node indices: (1, 1), (2, 1), (2, 2), (3, 1), (3, 2)\n\n\njulia> SDDP.train(model; iteration_limit = 10, print_level = 0);","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/#In-sample-Monte-Carlo-simulation","page":"Simulate using a different sampling scheme","title":"In-sample Monte Carlo simulation","text":"","category":"section"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"To simulate the policy using the data defined when model was created, use SDDP.InSampleMonteCarlo.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"julia> simulations = SDDP.simulate(\n model,\n 20;\n sampling_scheme = SDDP.InSampleMonteCarlo(),\n );\n\njulia> sort(unique([data[:noise_term] for sim in simulations for data in sim]))\n3-element Vector{@NamedTuple{inflow::Float64, fuel_multiplier::Float64}}:\n (inflow = 0.0, fuel_multiplier = 1.5)\n (inflow = 50.0, fuel_multiplier = 1.0)\n (inflow = 100.0, fuel_multiplier = 
0.75)","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/#Out-of-sample-Monte-Carlo-simulation","page":"Simulate using a different sampling scheme","title":"Out-of-sample Monte Carlo simulation","text":"","category":"section"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"Instead of using the in-sample data, we can perform an out-of-sample simulation of the policy using the SDDP.OutOfSampleMonteCarlo sampling scheme.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"For each node, the SDDP.OutOfSampleMonteCarlo needs to define a new distribution for the transition probabilities between nodes in the policy graph, and a new distribution for the stagewise independent noise terms.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"note: Note\nThe support of the distribution for the stagewise independent noise terms does not have to be the same as the in-sample distributions.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"julia> sampling_scheme = SDDP.OutOfSampleMonteCarlo(model) do node\n stage, markov_state = node\n if stage == 0\n # Called from the root node. Transition to (1, 1) with probability 1.0.\n # Only return the list of children, _not_ a list of noise terms.\n return [SDDP.Noise((1, 1), 1.0)]\n elseif stage == 3\n # Called from the final node. Return an empty list for the children,\n # and a single, deterministic realization for the noise terms.\n children = SDDP.Noise[]\n noise_terms = [SDDP.Noise((inflow = 75.0, fuel_multiplier = 1.2), 1.0)]\n return children, noise_terms\n else\n # Called from a normal node. Return the in-sample distribution for the\n # noise terms, but modify the transition probabilities so that the\n # Markov switching probability is now 50%.\n probability = markov_state == 1 ? [1/6, 1/3, 1/2] : [1/2, 1/3, 1/6]\n # Note: `Ω` is defined at the top of this page of documentation\n noise_terms = [SDDP.Noise(ω, p) for (ω, p) in zip(Ω, probability)]\n children = [\n SDDP.Noise((stage + 1, 1), 0.5), SDDP.Noise((stage + 1, 2), 0.5)\n ]\n return children, noise_terms\n end\n end;\n\njulia> simulations = SDDP.simulate(model, 1; sampling_scheme = sampling_scheme);\n\njulia> simulations[1][3][:noise_term]\n(inflow = 75.0, fuel_multiplier = 1.2)","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"Alternatively, if you only want to modify the stagewise independent noise terms, pass use_insample_transition = true.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"julia> sampling_scheme = SDDP.OutOfSampleMonteCarlo(\n model;\n use_insample_transition = true\n ) do node\n stage, markov_state = node\n if stage == 3\n # Called from the final node. 
Return a single, deterministic\n # realization for the noise terms. Don't return the children because we\n # use the in-sample data.\n return [SDDP.Noise((inflow = 65.0, fuel_multiplier = 1.1), 1.0)]\n else\n # Called from a normal node. Return the in-sample distribution for the\n # noise terms. Don't return the children because we use the in-sample\n # data.\n probability = markov_state == 1 ? [1/6, 1/3, 1/2] : [1/2, 1/3, 1/6]\n # Note: `Ω` is defined at the top of this page of documentation\n return [SDDP.Noise(ω, p) for (ω, p) in zip(Ω, probability)]\n end\n end;\n\njulia> simulations = SDDP.simulate(model, 1; sampling_scheme = sampling_scheme);\n\njulia> simulations[1][3][:noise_term]\n(inflow = 65.0, fuel_multiplier = 1.1)","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/#Historical-simulation","page":"Simulate using a different sampling scheme","title":"Historical simulation","text":"","category":"section"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"Instead of performing a Monte Carlo simulation like the previous tutorials, we may want to simulate one particular sequence of noise realizations. This historical simulation can also be conducted by passing a SDDP.Historical sampling scheme to the sampling_scheme keyword of the SDDP.simulate function.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"We can confirm that the historical sequence of nodes was visited by querying the :node_index key of the simulation results.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"julia> simulations = SDDP.simulate(\n model;\n sampling_scheme = SDDP.Historical(\n # Note: `Ω` is defined at the top of this page of documentation\n [((1, 1), Ω[1]), ((2, 2), Ω[3]), ((3, 1), Ω[2])],\n ),\n );\n\njulia> [stage[:node_index] for stage in simulations[1]]\n3-element Vector{Tuple{Int64, Int64}}:\n (1, 1)\n (2, 2)\n (3, 1)","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"You can also pass a vector of scenarios, which are sampled sequentially:","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"julia> sampling_scheme = SDDP.Historical(\n [\n [\n ((1,1), (inflow = 65.0, fuel_multiplier = 1.1)),\n ((2,2), (inflow = 10.0, fuel_multiplier = 1.4)), # Can be out-of-sample\n ((3,1), (inflow = 65.0, fuel_multiplier = 1.1)),\n ],\n [\n ((1,1), (inflow = 65.0, fuel_multiplier = 1.1)),\n ((2,2), (inflow = 100.0, fuel_multiplier = 0.75)),\n ((3,1), (inflow = 0.0, fuel_multiplier = 1.5)),\n ],\n ],\n )\nA Historical sampler with 2 scenarios sampled sequentially.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"Or a vector of scenarios and a corresponding vector of probabilities so that the historical scenarios are sampled 
probabilistically:","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"julia> sampling_scheme = SDDP.Historical(\n [\n [\n ((1,1), (inflow = 65.0, fuel_multiplier = 1.1)),\n ((2,2), (inflow = 10.0, fuel_multiplier = 1.4)), # Can be out-of-sample\n ((3,1), (inflow = 65.0, fuel_multiplier = 1.1)),\n ],\n [\n ((1,1), (inflow = 65.0, fuel_multiplier = 1.1)),\n ((2,2), (inflow = 100.0, fuel_multiplier = 0.75)),\n ((3,1), (inflow = 0.0, fuel_multiplier = 1.5)),\n ],\n ],\n [0.3, 0.7],\n )\nA Historical sampler with 2 scenarios sampled probabilistically.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"tip: Tip\nYour sample space doesn't have to be a NamedTuple. It an be any Julia type! Use a Vector if that is easier, or define your own struct.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"EditURL = \"first_steps.jl\"","category":"page"},{"location":"tutorial/first_steps/#An-introduction-to-SDDP.jl","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"SDDP.jl is a solver for multistage stochastic optimization problems. By multistage, we mean problems in which an agent makes a sequence of decisions over time. By stochastic, we mean that the agent is making decisions in the presence of uncertainty that is gradually revealed over the multiple stages.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"tip: Tip\nMultistage stochastic programming has a lot in common with fields like stochastic optimal control, approximate dynamic programming, Markov decision processes, and reinforcement learning. If it helps, you can think of SDDP as Q-learning in which we approximate the value function using linear programming duality.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"This tutorial is in two parts. First, it is an introduction to the background notation and theory we need, and second, it solves a simple multistage stochastic programming problem.","category":"page"},{"location":"tutorial/first_steps/#What-is-a-node?","page":"An introduction to SDDP.jl","title":"What is a node?","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"A common feature of multistage stochastic optimization problems is that they model an agent controlling a system over time. To simplify things initially, we're going to start by describing what happens at an instant in time at which the agent makes a decision. 
Only after this will we extend our problem to multiple stages and the notion of time.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"A node is a place at which the agent makes a decision.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"tip: Tip\nFor readers with a stochastic programming background, \"node\" is synonymous with \"stage\" in this section. However, for reasons that will become clear shortly, there can be more than one \"node\" per instant in time, which is why we prefer the term \"node\" over \"stage.\"","category":"page"},{"location":"tutorial/first_steps/#States,-controls,-and-random-variables","page":"An introduction to SDDP.jl","title":"States, controls, and random variables","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The system that we are modeling can be described by three types of variables.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"State variables track a property of the system over time.\nEach node has an associated incoming state variable (the value of the state at the start of the node), and an outgoing state variable (the value of the state at the end of the node).\nExamples of state variables include the volume of water in a reservoir, the number of units of inventory in a warehouse, or the spatial position of a moving vehicle.\nBecause state variables track the system over time, each node must have the same set of state variables.\nWe denote state variables by the letter x for the incoming state variable and x^prime for the outgoing state variable.\nControl variables are actions taken (implicitly or explicitly) by the agent within a node which modify the state variables.\nExamples of control variables include releases of water from the reservoir, sales or purchasing decisions, and acceleration or braking of the vehicle.\nControl variables are local to a node i, and they can differ between nodes. For example, some control variables may be available within certain nodes.\nWe denote control variables by the letter u.\nRandom variables are finite, discrete, exogenous random variables that the agent observes at the start of a node, before the control variables are decided.\nExamples of random variables include rainfall inflow into a reservoir, probabilistic perishing of inventory, and steering errors in a vehicle.\nRandom variables are local to a node i, and they can differ between nodes. For example, some nodes may have random variables, and some nodes may not.\nWe denote random variables by the Greek letter omega and the sample space from which they are drawn by Omega_i. 
The probability of sampling omega is denoted p_omega for simplicity.\nImportantly, the random variable associated with node i is independent of the random variables in all other nodes.","category":"page"},{"location":"tutorial/first_steps/#Dynamics","page":"An introduction to SDDP.jl","title":"Dynamics","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"In a node i, the three variables are related by a transition function, which maps the incoming state, the controls, and the random variables to the outgoing state as follows: x^prime = T_i(x u omega).","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"As a result of entering a node i with the incoming state x, observing random variable omega, and choosing control u, the agent incurs a cost C_i(x u omega). (If the agent is a maximizer, this can be a profit, or a negative cost.) We call C_i the stage objective.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"To choose their control variables in node i, the agent uses a decision rule u = pi_i(x omega), which is a function that maps the incoming state variable and observation of the random variable to a control u. This control must satisfy some feasibility requirements u in U_i(x omega).","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Here is a schematic which we can use to visualize a single node:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"(Image: Hazard-decision node)","category":"page"},{"location":"tutorial/first_steps/#Policy-graphs","page":"An introduction to SDDP.jl","title":"Policy graphs","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Now that we have a node, we need to connect multiple nodes together to form a multistage stochastic program. We call the graph created by connecting nodes together a policy graph.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The simplest type of policy graph is a linear policy graph. Here's a linear policy graph with three nodes:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"(Image: Linear policy graph)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Here we have dropped the notations inside each node and replaced them by a label (1, 2, and 3) to represent nodes i=1, i=2, and i=3.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"In addition to nodes 1, 2, and 3, there is also a root node (the circle), and three arcs. Each arc has an origin node and a destination node, like 1 => 2, and a corresponding probability of transitioning from the origin to the destination. Unless specified, we assume that the arc probabilities are uniform over the number of outgoing arcs. 
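As a side note (this sketch is not part of the original tutorial text), a graph like this one can also be written down explicitly using SDDP.Graph, SDDP.add_node, and SDDP.add_edge, which makes the nodes, arcs, and arc probabilities visible:\n\nusing SDDP\ngraph = SDDP.Graph(0)              # 0 is the root node\nfor t in 1:3\n    SDDP.add_node(graph, t)        # decision nodes 1, 2, and 3\nend\nSDDP.add_edge(graph, 0 => 1, 1.0)  # root => 1 with probability 1.0\nSDDP.add_edge(graph, 1 => 2, 1.0)\nSDDP.add_edge(graph, 2 => 3, 1.0)\n\nThis should be equivalent to the convenience constructor SDDP.LinearGraph(3). In this linear graph every node has exactly one outgoing arc. 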
Thus, in this picture the arc probabilities are all 1.0.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"State variables flow along the arcs of the graph. Thus, the outgoing state variable x^prime from node 1 becomes the incoming state variable x to node 2, and so on.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"We denote the set of nodes by mathcalN, the root node by R, and the probability of transitioning from node i to node j by p_ij. (If no arc exists, then p_ij = 0.) We define the set of successors of node i as i^+ = j in mathcalN p_ij 0.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Each node in the graph corresponds to a place at which the agent makes a decision, and we call moments in time at which the agent makes a decision stages. By convention, we try to draw policy graphs from left-to-right, with the stages as columns. There can be more than one node in a stage! Here's an example of a structure we call a Markovian policy graph:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"(Image: Markovian policy graph)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Here each column represents a moment in time, the squiggly lines represent stochastic rainfall, and the rows represent the world in two discrete states: El Niño and La Niña. In the El Niño states, the distribution of the rainfall random variable is different to the distribution of the rainfall random variable in the La Niña states, and there is some switching probability between the two states that can be modelled by a Markov chain.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Moreover, policy graphs can have cycles! This allows them to model infinite horizon problems. Here's another example, taken from the paper Dowson (2020):","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"(Image: POWDer policy graph)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The columns represent time, and the rows represent different states of the world. In this case, the rows represent different prices that milk can be sold for at the end of each year. The squiggly lines denote a multivariate random variable that models the weekly amount of rainfall that occurs.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"note: Note\nThe sum of probabilities on the outgoing arcs of node i can be less than 1, i.e., sumlimits_jin i^+ p_ij le 1. What does this mean? One interpretation is that the probability is a discount factor. Another interpretation is that there is an implicit \"zero\" node that we have not modeled, with p_i0 = 1 - sumlimits_jin i^+ p_ij. 
This zero node has C_0(x u omega) = 0, and 0^+ = varnothing.","category":"page"},{"location":"tutorial/first_steps/#More-notation","page":"An introduction to SDDP.jl","title":"More notation","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Recall that each node i has a decision rule u = pi_i(x omega), which is a function that maps the incoming state variable and observation of the random variable to a control u.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The set of decision rules, with one element for each node in the policy graph, is called a policy.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The goal of the agent is to find a policy that minimizes the expected cost of starting at the root node with some initial condition x_R, and proceeding from node to node along the probabilistic arcs until they reach a node with no outgoing arcs (or it reaches an implicit \"zero\" node).","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"min_pi mathbbE_i in R^+ omega in Omega_iV_i^pi(x_R omega)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"where","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"V_i^pi(x omega) = C_i(x u omega) + mathbbE_j in i^+ varphi in Omega_jV_j(x^prime varphi)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"where u = pi_i(x omega) in U_i(x omega), and x^prime = T_i(x u omega).","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The expectations are a bit complicated, but they are equivalent to:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"mathbbE_j in i^+ varphi in Omega_jV_j(x^prime varphi) = sumlimits_j in i^+ p_ij sumlimits_varphi in Omega_j p_varphiV_j(x^prime varphi)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"An optimal policy is the set of decision rules that the agent can use to make decisions and achieve the smallest expected cost.","category":"page"},{"location":"tutorial/first_steps/#Assumptions","page":"An introduction to SDDP.jl","title":"Assumptions","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"warning: Warning\nThis section is important!","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The space of problems you can model with this framework is very large. Too large, in fact, for us to form tractable solution algorithms for! 
Stochastic dual dynamic programming requires the following assumptions in order to work:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Assumption 1: finite nodes","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"There is a finite number of nodes in mathcalN.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Assumption 2: finite random variables","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The sample space Omega_i is finite and discrete for each node iinmathcalN.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Assumption 3: convex problems","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Given fixed omega, C_i(x u omega) is a convex function, T_i(x u omega) is linear, and U_i(x u omega) is a non-empty, bounded convex set with respect to x and u.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Assumption 4: no infinite loops","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"For all loops in the policy graph, the product of the arc transition probabilities around the loop is strictly less than 1.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Assumption 5: relatively complete recourse","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"This is a technical but important assumption. See Relatively complete recourse for more details.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"note: Note\nSDDP.jl relaxes assumption (3) to allow for integer state and control variables, but we won't go into the details here. Assumption (4) essentially means that we obtain a discounted-cost solution for infinite-horizon problems, instead of an average-cost solution; see Dowson (2020) for details.","category":"page"},{"location":"tutorial/first_steps/#Dynamic-programming-and-subproblems","page":"An introduction to SDDP.jl","title":"Dynamic programming and subproblems","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Now that we have formulated our problem, we need some ways of computing optimal decision rules. 
One way is to just use a heuristic like \"choose a control randomly from the set of feasible controls.\" However, such a policy is unlikely to be optimal.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"A better way of obtaining an optimal policy is to use Bellman's principle of optimality, a.k.a Dynamic Programming, and define a recursive subproblem as follows:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"beginaligned\nV_i(x omega) = minlimits_barx x^prime u C_i(barx u omega) + mathbbE_j in i^+ varphi in Omega_jV_j(x^prime varphi)\n x^prime = T_i(barx u omega) \n u in U_i(barx omega) \n barx = x\nendaligned","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Our decision rule, pi_i(x omega), solves this optimization problem and returns a u^* corresponding to an optimal solution.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"note: Note\nWe add barx as a decision variable, along with the fishing constraint barx = x for two reasons: it makes it obvious that formulating a problem with x times u results in a bilinear program instead of a linear program (see Assumption 3), and it simplifies the implementation of the SDDP algorithm.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"These subproblems are very difficult to solve exactly, because they involve recursive optimization problems with lots of nested expectations.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Therefore, instead of solving them exactly, SDDP.jl works by iteratively approximating the expectation term of each subproblem, which is also called the cost-to-go term. For now, you don't need to understand the details, other than that there is a nasty cost-to-go term that we deal with behind-the-scenes.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The subproblem view of a multistage stochastic program is also important, because it provides a convenient way of communicating the different parts of the broader problem, and it is how we will communicate the problem to SDDP.jl. All we need to do is drop the cost-to-go term and fishing constraint, and define a new subproblem SP as:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"beginaligned\ntextttSP_i(x omega) minlimits_barx x^prime u C_i(barx u omega) \n x^prime = T_i(barx u omega) \n u in U_i(barx omega)\nendaligned","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"note: Note\nWhen we talk about formulating a subproblem with SDDP.jl, this is the formulation we mean.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"We've retained the transition function and uncertainty set because they help to motivate the different components of the subproblem. However, in general, the subproblem can be more general. 
A better (less restrictive) representation might be:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"beginaligned\ntextttSP_i(x omega) minlimits_barx x^prime u C_i(barx x^prime u omega) \n (barx x^prime u) in mathcalX_i(omega)\nendaligned","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Note that the outgoing state variable can appear in the objective, and we can add constraints involving the incoming and outgoing state variables. It should be obvious how to map between the two representations.","category":"page"},{"location":"tutorial/first_steps/#Example:-hydro-thermal-scheduling","page":"An introduction to SDDP.jl","title":"Example: hydro-thermal scheduling","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Hydrothermal scheduling is the most common application of stochastic dual dynamic programming. To illustrate some of the basic functionality of SDDP.jl, we implement a very simple model of the hydrothermal scheduling problem.","category":"page"},{"location":"tutorial/first_steps/#Problem-statement","page":"An introduction to SDDP.jl","title":"Problem statement","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"We consider the problem of scheduling electrical generation over three weeks in order to meet a known demand of 150 MWh in each week.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"There are two generators: a thermal generator, and a hydro generator. In each week, the agent needs to decide how much energy to generate from thermal, and how much energy to generate from hydro.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The thermal generator has a short-run marginal cost of $50/MWh in the first stage, $100/MWh in the second stage, and $150/MWh in the third stage.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The hydro generator has a short-run marginal cost of $0/MWh.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The hydro generator draws water from a reservoir which has a maximum capacity of 200 MWh. (Although water is usually measured in m³, we measure it in the energy-equivalent MWh to simplify things. In practice, there is a conversion function between m³ flowing through the turbine and MWh.) At the start of the first time period, the reservoir is full.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"In addition to the ability to generate electricity by passing water through the hydroelectric turbine, the hydro generator can also spill water down a spillway (bypassing the turbine) in order to prevent the water from over-topping the dam. 
We assume that there is no cost of spillage.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"In addition to water leaving the reservoir, water that flows into the reservoir through rainfall or rivers is referred to as inflows. These inflows are uncertain, and are the cause of the main trade-off in hydro-thermal scheduling: the desire to use water now to generate cheap electricity, against the risk that future inflows will be low, leading to blackouts or expensive thermal generation.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"For our simple model, we assume that the inflows can be modelled by a discrete distribution with the three outcomes given in the following table:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"ω 0 50 100\nP(ω) 1/3 1/3 1/3","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The value of the noise (the random variable) is observed by the agent at the start of each stage. This makes the problem a wait-and-see or hazard-decision formulation.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The goal of the agent is to minimize the expected cost of generation over the three weeks.","category":"page"},{"location":"tutorial/first_steps/#Formulating-the-problem","page":"An introduction to SDDP.jl","title":"Formulating the problem","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Before going further, we need to load SDDP.jl:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"using SDDP","category":"page"},{"location":"tutorial/first_steps/#Graph-structure","page":"An introduction to SDDP.jl","title":"Graph structure","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"First, we need to identify the structure of the policy graph. From the problem statement, we want to model the problem over three weeks in weekly stages. Therefore, the policy graph is a linear graph with three stages:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"graph = SDDP.LinearGraph(3)","category":"page"},{"location":"tutorial/first_steps/#Building-the-subproblem","page":"An introduction to SDDP.jl","title":"Building the subproblem","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Next, we need to construct the associated subproblem for each node in graph. To do so, we need to provide SDDP.jl with a function which takes two arguments. The first is subproblem::Model, which is an empty JuMP model. The second is node, which is the name of each node in the policy graph. If the graph is linear, SDDP defaults to naming the nodes using the integers in 1:T. 
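Because node is then just an integer, it can be used to look up stage-dependent data inside the function; for example, a hypothetical snippet (not part of the running example) might be:\n\nfuel_cost = [50.0, 100.0, 150.0]   # one (made-up) entry per stage\ncost_for_this_stage = fuel_cost[node]\n\n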
Here's an example that we are going to flesh out over the next few paragraphs:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"function subproblem_builder(subproblem::Model, node::Int)\n # ... stuff to go here ...\n return subproblem\nend","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"warning: Warning\nIf you use a different type of graph, node may be a type different to Int. For example, in SDDP.MarkovianGraph, node is a Tuple{Int,Int}.","category":"page"},{"location":"tutorial/first_steps/#State-variables","page":"An introduction to SDDP.jl","title":"State variables","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The first part of the subproblem we need to identify is the state variables. Since we only have one reservoir, there is only one state variable, volume, the volume of water in the reservoir [MWh].","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The volume had bounds of [0, 200], and the reservoir was full at the start of time, so x_R = 200.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"We add state variables to our subproblem using JuMP's @variable macro. However, in addition to the usual syntax, we also pass SDDP.State, and we need to provide the initial value (x_R) using the initial_value keyword.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"function subproblem_builder(subproblem::Model, node::Int)\n # State variables\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n return subproblem\nend","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The syntax for adding a state variable is a little obtuse, because volume is not a single JuMP variable. Instead, volume is a struct with two fields, .in and .out, corresponding to the incoming and outgoing state variables respectively.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"note: Note\nWe don't need to add the fishing constraint barx = x; SDDP.jl does this automatically.","category":"page"},{"location":"tutorial/first_steps/#Control-variables","page":"An introduction to SDDP.jl","title":"Control variables","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The next part of the subproblem we need to identify is the control variables. 
The control variables for our problem are:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"thermal_generation: the quantity of energy generated from thermal [MWh/week]\nhydro_generation: the quantity of energy generated from hydro [MWh/week]\nhydro_spill: the volume of water spilled from the reservoir in each week [MWh/week]","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Each of these variables is non-negative.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"We add control variables to our subproblem as normal JuMP variables, using @variable or @variables:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"function subproblem_builder(subproblem::Model, node::Int)\n # State variables\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n # Control variables\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n end)\n return subproblem\nend","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"tip: Tip\nModeling is an art, and a tricky part of that art is figuring out which variables are state variables, and which are control variables. A good rule is: if you need a value of a control variable in some future node to make a decision, it is a state variable instead.","category":"page"},{"location":"tutorial/first_steps/#Random-variables","page":"An introduction to SDDP.jl","title":"Random variables","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The next step is to identify any random variables. In our example, we had","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"inflow: the quantity of water that flows into the reservoir each week [MWh/week]","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"To add an uncertain variable to the model, we create a new JuMP variable inflow, and then call the function SDDP.parameterize. 
The SDDP.parameterize function takes three arguments: the subproblem, a vector of realizations, and a corresponding vector of probabilities.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"function subproblem_builder(subproblem::Model, node::Int)\n # State variables\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n # Control variables\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n end)\n # Random variables\n @variable(subproblem, inflow)\n Ω = [0.0, 50.0, 100.0]\n P = [1 / 3, 1 / 3, 1 / 3]\n SDDP.parameterize(subproblem, Ω, P) do ω\n return JuMP.fix(inflow, ω)\n end\n return subproblem\nend","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Note how we use the JuMP function JuMP.fix to set the value of the inflow variable to ω.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"warning: Warning\nSDDP.parameterize can only be called once in each subproblem definition! If your random variable is multi-variate, read Add multi-dimensional noise terms.","category":"page"},{"location":"tutorial/first_steps/#Transition-function-and-constraints","page":"An introduction to SDDP.jl","title":"Transition function and constraints","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Now that we've identified our variables, we can define the transition function and the constraints.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"For our problem, the state variable is the volume of water in the reservoir. The volume of water decreases in response to water being used for hydro generation and spillage. So the transition function is: volume.out = volume.in - hydro_generation - hydro_spill + inflow. 
(Note how we use volume.in and volume.out to refer to the incoming and outgoing state variables.)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"There is also a constraint that the total generation must sum to 150 MWh.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Both the transition function and any additional constraint are added using JuMP's @constraint and @constraints macro.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"function subproblem_builder(subproblem::Model, node::Int)\n # State variables\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n # Control variables\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n end)\n # Random variables\n @variable(subproblem, inflow)\n Ω = [0.0, 50.0, 100.0]\n P = [1 / 3, 1 / 3, 1 / 3]\n SDDP.parameterize(subproblem, Ω, P) do ω\n return JuMP.fix(inflow, ω)\n end\n # Transition function and constraints\n @constraints(\n subproblem,\n begin\n volume.out == volume.in - hydro_generation - hydro_spill + inflow\n demand_constraint, hydro_generation + thermal_generation == 150\n end\n )\n return subproblem\nend","category":"page"},{"location":"tutorial/first_steps/#Objective-function","page":"An introduction to SDDP.jl","title":"Objective function","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Finally, we need to add an objective function using @stageobjective. The objective of the agent is to minimize the cost of thermal generation. 
This is complicated by a fuel cost that depends on the node.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"One possibility is to use an if statement on node to define the correct objective:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"function subproblem_builder(subproblem::Model, node::Int)\n # State variables\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n # Control variables\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n end)\n # Random variables\n @variable(subproblem, inflow)\n Ω = [0.0, 50.0, 100.0]\n P = [1 / 3, 1 / 3, 1 / 3]\n SDDP.parameterize(subproblem, Ω, P) do ω\n return JuMP.fix(inflow, ω)\n end\n # Transition function and constraints\n @constraints(\n subproblem,\n begin\n volume.out == volume.in - hydro_generation - hydro_spill + inflow\n demand_constraint, hydro_generation + thermal_generation == 150\n end\n )\n # Stage-objective\n if node == 1\n @stageobjective(subproblem, 50 * thermal_generation)\n elseif node == 2\n @stageobjective(subproblem, 100 * thermal_generation)\n else\n @assert node == 3\n @stageobjective(subproblem, 150 * thermal_generation)\n end\n return subproblem\nend","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"A second possibility is to use an array of fuel costs, and use node to index the correct value:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"function subproblem_builder(subproblem::Model, node::Int)\n # State variables\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n # Control variables\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n end)\n # Random variables\n @variable(subproblem, inflow)\n Ω = [0.0, 50.0, 100.0]\n P = [1 / 3, 1 / 3, 1 / 3]\n SDDP.parameterize(subproblem, Ω, P) do ω\n return JuMP.fix(inflow, ω)\n end\n # Transition function and constraints\n @constraints(\n subproblem,\n begin\n volume.out == volume.in - hydro_generation - hydro_spill + inflow\n demand_constraint, hydro_generation + thermal_generation == 150\n end\n )\n # Stage-objective\n fuel_cost = [50, 100, 150]\n @stageobjective(subproblem, fuel_cost[node] * thermal_generation)\n return subproblem\nend","category":"page"},{"location":"tutorial/first_steps/#Constructing-the-model","page":"An introduction to SDDP.jl","title":"Constructing the model","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Now that we've written our subproblem, we need to construct the full model. For that, we're going to need a linear solver. Let's choose HiGHS:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"using HiGHS","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"warning: Warning\nIn larger problems, you should use a more robust commercial LP solver like Gurobi. 
Read Words of warning for more details.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Then, we can create a full model using SDDP.PolicyGraph, passing our subproblem_builder function as the first argument, and our graph as the second:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"model = SDDP.PolicyGraph(\n subproblem_builder,\n graph;\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"sense: the optimization sense. Must be :Min or :Max.\nlower_bound: you must supply a valid bound on the objective. For our problem, we know that we cannot incur a negative cost, so $0 is a valid lower bound.\noptimizer: This is borrowed directly from JuMP's Model constructor: Model(HiGHS.Optimizer)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Because linear policy graphs are the most commonly used structure, we can use SDDP.LinearPolicyGraph instead of passing SDDP.LinearGraph(3) to SDDP.PolicyGraph.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"model = SDDP.LinearPolicyGraph(\n subproblem_builder;\n stages = 3,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"There is also the option to use Julia's do syntax to avoid needing to define a subproblem_builder function separately:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"model = SDDP.LinearPolicyGraph(;\n stages = 3,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, node\n # State variables\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n # Control variables\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n end)\n # Random variables\n @variable(subproblem, inflow)\n Ω = [0.0, 50.0, 100.0]\n P = [1 / 3, 1 / 3, 1 / 3]\n SDDP.parameterize(subproblem, Ω, P) do ω\n return JuMP.fix(inflow, ω)\n end\n # Transition function and constraints\n @constraints(\n subproblem,\n begin\n volume.out == volume.in - hydro_generation - hydro_spill + inflow\n demand_constraint, hydro_generation + thermal_generation == 150\n end\n )\n # Stage-objective\n if node == 1\n @stageobjective(subproblem, 50 * thermal_generation)\n elseif node == 2\n @stageobjective(subproblem, 100 * thermal_generation)\n else\n @assert node == 3\n @stageobjective(subproblem, 150 * thermal_generation)\n end\nend","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"info: Info\nJulia's do syntax is just a different way of passing an anonymous function inner to some function outer which takes inner as the first argument. 
For example, given:outer(inner::Function, x, y) = inner(x, y)thenouter(1, 2) do x, y\n return x^2 + y^2\nendis equivalent to:outer((x, y) -> x^2 + y^2, 1, 2)For our purpose, inner is subproblem_builder, and outer is SDDP.PolicyGraph.","category":"page"},{"location":"tutorial/first_steps/#Training-a-policy","page":"An introduction to SDDP.jl","title":"Training a policy","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Now we have a model, which is a description of the policy graph, we need to train a policy. Models can be trained using the SDDP.train function. It accepts a number of keyword arguments. iteration_limit terminates the training after the provided number of iterations.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"SDDP.train(model; iteration_limit = 10)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"There's a lot going on in this printout! Let's break it down.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The first section, \"problem,\" gives some problem statistics. In this example there are 3 nodes, 1 state variable, and 27 scenarios (3^3). We haven't solved this problem before so there are no existing cuts.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The \"options\" section lists some options we are using to solve the problem. For more information on the numerical stability report, read the Numerical stability report section.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The \"subproblem structure\" section also needs explaining. This looks at all of the nodes in the policy graph and reports the minimum and maximum number of variables and each constraint type in the corresponding subproblem. In this case each subproblem has 7 variables and various numbers of different constraint types. Note that the exact numbers may not correspond to the formulation as you wrote it, because SDDP.jl adds some extra variables for the cost-to-go function.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Then comes the iteration log, which is the main part of the printout. It has the following columns:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"iteration: the SDDP iteration\nsimulation: the cost of the single forward pass simulation for that iteration. This value is stochastic and is not guaranteed to improve over time. However, it's useful to check that the units are reasonable, and that it is not deterministic if you intended for the problem to be stochastic, etc.\nbound: this is a lower bound (upper if maximizing) for the value of the optimal policy. 
This bound should be monotonically improving (increasing if minimizing, decreasing if maximizing), but in some cases it can temporarily worsen due to cut selection, especially in the early iterations of the algorithm.\ntime (s): the total number of seconds spent solving so far\nsolves: the total number of subproblem solves to date. This can be very large!\npid: the ID of the processor used to solve that iteration. This should be 1 unless you are using parallel computation.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"In addition, if the first character of a line is †, then SDDP.jl experienced numerical issues during the solve, but successfully recovered.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The printout finishes with some summary statistics:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"status: why did the solver stop?\ntotal time (s), best bound, and total solves are the values from the last iteration of the solve.\nsimulation ci: a confidence interval that estimates the quality of the policy from the Simulation column.\nnumeric issues: the number of iterations that experienced numerical issues.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"warning: Warning\nThe simulation ci result can be misleading if you run a small number of iterations, or if the initial simulations are very bad. On a more technical note, it is an in-sample simulation, which may not reflect the true performance of the policy. See Obtaining bounds for more details.","category":"page"},{"location":"tutorial/first_steps/#Obtaining-the-decision-rule","page":"An introduction to SDDP.jl","title":"Obtaining the decision rule","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"After training a policy, we can create a decision rule using SDDP.DecisionRule:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"rule = SDDP.DecisionRule(model; node = 1)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Then, to evaluate the decision rule, we use SDDP.evaluate:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"solution = SDDP.evaluate(\n rule;\n incoming_state = Dict(:volume => 150.0),\n noise = 50.0,\n controls_to_record = [:hydro_generation, :thermal_generation],\n)","category":"page"},{"location":"tutorial/first_steps/#Simulating-the-policy","page":"An introduction to SDDP.jl","title":"Simulating the policy","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Once you have a trained policy, you can also simulate it using SDDP.simulate. The return value from simulate is a vector with one element for each replication. Each element is itself a vector, with one element for each stage. 
Each element, corresponding to a particular stage in a particular replication, is a dictionary that records information from the simulation.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"simulations = SDDP.simulate(\n # The trained model to simulate.\n model,\n # The number of replications.\n 100,\n # A list of names to record the values of.\n [:volume, :thermal_generation, :hydro_generation, :hydro_spill],\n)\n\nreplication = 1\nstage = 2\nsimulations[replication][stage]","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Ignore many of the entries for now; they will be relevant later.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"One element of interest is :volume.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"outgoing_volume = map(simulations[1]) do node\n return node[:volume].out\nend","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Another is :thermal_generation.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"thermal_generation = map(simulations[1]) do node\n return node[:thermal_generation]\nend","category":"page"},{"location":"tutorial/first_steps/#Obtaining-bounds","page":"An introduction to SDDP.jl","title":"Obtaining bounds","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Because the optimal policy is stochastic, one common approach to quantify the quality of the policy is to construct a confidence interval for the expected cost by summing the stage objectives along each simulation.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"objectives = map(simulations) do simulation\n return sum(stage[:stage_objective] for stage in simulation)\nend\n\nμ, ci = SDDP.confidence_interval(objectives)\nprintln(\"Confidence interval: \", μ, \" ± \", ci)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"This confidence interval is an estimate for an upper bound of the policy's quality. We can calculate the lower bound using SDDP.calculate_bound.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"println(\"Lower bound: \", SDDP.calculate_bound(model))","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"tip: Tip\nThe upper- and lower-bounds are reversed if maximizing, i.e., SDDP.calculate_bound. returns an upper bound.","category":"page"},{"location":"tutorial/first_steps/#Custom-recorders","page":"An introduction to SDDP.jl","title":"Custom recorders","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"In addition to simulating the primal values of variables, we can also pass custom recorder functions. 
Each of these functions takes one argument, the JuMP subproblem corresponding to each node. This function gets called after we have solved each node as we traverse the policy graph in the simulation.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"For example, the dual of the demand constraint (which we named demand_constraint) corresponds to the price we should charge for electricity, since it represents the cost of each additional unit of demand. To calculate this, we can go:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"simulations = SDDP.simulate(\n model,\n 1; ## Perform a single simulation\n custom_recorders = Dict{Symbol,Function}(\n :price => (sp::JuMP.Model) -> JuMP.dual(sp[:demand_constraint]),\n ),\n)\n\nprices = map(simulations[1]) do node\n return node[:price]\nend","category":"page"},{"location":"tutorial/first_steps/#Extracting-the-marginal-water-values","page":"An introduction to SDDP.jl","title":"Extracting the marginal water values","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Finally, we can use SDDP.ValueFunction and SDDP.evaluate to obtain and evaluate the value function at different points in the state-space.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"note: Note\nBy \"value function\" we mean mathbbE_j in i^+ varphi in Omega_jV_j(x^prime varphi), not the function V_i(x omega).","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"First, we construct a value function from the first subproblem:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"V = SDDP.ValueFunction(model; node = 1)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Then we can evaluate V at a point:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"cost, price = SDDP.evaluate(V, Dict(\"volume\" => 10))","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"This returns the cost-to-go (cost), and the gradient of the cost-to-go function with respect to each state variable. Note that since we are minimizing, the price has a negative sign: each additional unit of water leads to a decrease in the expected long-run cost.","category":"page"},{"location":"examples/StochDynamicProgramming.jl_stock/","page":"StochDynamicProgramming: the stock problem","title":"StochDynamicProgramming: the stock problem","text":"EditURL = \"StochDynamicProgramming.jl_stock.jl\"","category":"page"},{"location":"examples/StochDynamicProgramming.jl_stock/#StochDynamicProgramming:-the-stock-problem","page":"StochDynamicProgramming: the stock problem","title":"StochDynamicProgramming: the stock problem","text":"","category":"section"},{"location":"examples/StochDynamicProgramming.jl_stock/","page":"StochDynamicProgramming: the stock problem","title":"StochDynamicProgramming: the stock problem","text":"This tutorial was generated using Literate.jl. 
Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/StochDynamicProgramming.jl_stock/","page":"StochDynamicProgramming: the stock problem","title":"StochDynamicProgramming: the stock problem","text":"This example comes from StochDynamicProgramming.jl.","category":"page"},{"location":"examples/StochDynamicProgramming.jl_stock/","page":"StochDynamicProgramming: the stock problem","title":"StochDynamicProgramming: the stock problem","text":"using SDDP, HiGHS, Test\n\nfunction stock_example()\n model = SDDP.PolicyGraph(\n SDDP.LinearGraph(5);\n lower_bound = -2,\n optimizer = HiGHS.Optimizer,\n ) do sp, stage\n @variable(sp, 0 <= state <= 1, SDDP.State, initial_value = 0.5)\n @variable(sp, 0 <= control <= 0.5)\n @variable(sp, ξ)\n @constraint(sp, state.out == state.in - control + ξ)\n SDDP.parameterize(sp, 0.0:1/30:0.3) do ω\n return JuMP.fix(ξ, ω)\n end\n @stageobjective(sp, (sin(3 * stage) - 1) * control)\n end\n SDDP.train(model; log_frequency = 10)\n @test SDDP.calculate_bound(model) ≈ -1.471 atol = 0.001\n simulation_results = SDDP.simulate(model, 1_000)\n @test length(simulation_results) == 1_000\n μ = SDDP.Statistics.mean(\n sum(data[:stage_objective] for data in simulation) for\n simulation in simulation_results\n )\n @test μ ≈ -1.471 atol = 0.05\n return\nend\n\nstock_example()","category":"page"},{"location":"examples/agriculture_mccardle_farm/","page":"The farm planning problem","title":"The farm planning problem","text":"EditURL = \"agriculture_mccardle_farm.jl\"","category":"page"},{"location":"examples/agriculture_mccardle_farm/#The-farm-planning-problem","page":"The farm planning problem","title":"The farm planning problem","text":"","category":"section"},{"location":"examples/agriculture_mccardle_farm/","page":"The farm planning problem","title":"The farm planning problem","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/agriculture_mccardle_farm/","page":"The farm planning problem","title":"The farm planning problem","text":"There are four stages. The first stage is a deterministic planning stage. The next three are wait-and-see operational stages. The uncertainty in the three operational stages is a Markov chain for weather. There are three Markov states: dry, normal, and wet.","category":"page"},{"location":"examples/agriculture_mccardle_farm/","page":"The farm planning problem","title":"The farm planning problem","text":"Inspired by R. McCardle, Farm management optimization. 
Masters thesis, University of Louisville, Louisville, Kentucky, United States of America (2009).","category":"page"},{"location":"examples/agriculture_mccardle_farm/","page":"The farm planning problem","title":"The farm planning problem","text":"All data, including short variable names, is taken from that thesis.","category":"page"},{"location":"examples/agriculture_mccardle_farm/","page":"The farm planning problem","title":"The farm planning problem","text":"using SDDP, HiGHS, Test\n\nfunction test_mccardle_farm_model()\n S = [ # cutting, stage\n 0 1 2\n 0 0 1\n 0 0 0\n ]\n t = [60, 60, 245] # days in period\n D = [210, 210, 858] # demand\n q = [ # selling price per bale\n [4.5 4.5 4.5; 4.5 4.5 4.5; 4.5 4.5 4.5],\n [5.5 5.5 5.5; 5.5 5.5 5.5; 5.5 5.5 5.5],\n [6.5 6.5 6.5; 6.5 6.5 6.5; 6.5 6.5 6.5],\n ]\n b = [ # predicted yield (bales/acres) from cutting i in weather j.\n 30 75 37.5\n 15 37.5 18.25\n 7.5 18.75 9.325\n ]\n w = 3000 # max storage\n C = [50 50 50; 50 50 50; 50 50 50] # cost to grow hay\n r = [ # Cost per bale of hay from cutting i during weather condition j.\n [5 5 5; 5 5 5; 5 5 5],\n [6 6 6; 6 6 6; 6 6 6],\n [7 7 7; 7 7 7; 7 7 7],\n ]\n M = 60.0 # max acreage for planting\n H = 0.0 # initial inventory\n V = [0.05, 0.05, 0.05] # inventory cost\n L = 3000.0 # max demand for hay\n\n graph = SDDP.MarkovianGraph([\n ones(Float64, 1, 1),\n [0.14 0.69 0.17],\n [0.14 0.69 0.17; 0.14 0.69 0.17; 0.14 0.69 0.17],\n [0.14 0.69 0.17; 0.14 0.69 0.17; 0.14 0.69 0.17],\n ])\n\n model = SDDP.PolicyGraph(\n graph;\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, index\n stage, weather = index\n # ===================== State Variables =====================\n # Area planted.\n @variable(subproblem, 0 <= acres <= M, SDDP.State, initial_value = M)\n @variable(\n subproblem,\n bales[i = 1:3] >= 0,\n SDDP.State,\n initial_value = (i == 1 ? 
H : 0)\n )\n # ===================== Variables =====================\n @variables(subproblem, begin\n buy[1:3] >= 0 # Quantity of bales to buy from each cutting.\n sell[1:3] >= 0 # Quantity of bales to sell from each cutting.\n eat[1:3] >= 0 # Quantity of bales to eat from each cutting.\n pen_p[1:3] >= 0 # Penalties\n pen_n[1:3] >= 0 # Penalties\n end)\n # ===================== Constraints =====================\n if stage == 1\n @constraint(subproblem, acres.out <= acres.in)\n @constraint(subproblem, [i = 1:3], bales[i].in == bales[i].out)\n else\n @expression(\n subproblem,\n cut_ex[c = 1:3],\n bales[c].in + buy[c] - eat[c] - sell[c] + pen_p[c] - pen_n[c]\n )\n @constraints(\n subproblem,\n begin\n # Cannot plant more land than previously cropped.\n acres.out <= acres.in\n # In each stage we need to meet demand.\n sum(eat) >= D[stage-1]\n # We can buy and sell other cuttings.\n bales[stage-1].out ==\n cut_ex[stage-1] + acres.in * b[stage-1, weather]\n [c = 1:3; c != stage - 1], bales[c].out == cut_ex[c]\n # There is some maximum storage.\n sum(bales[i].out for i in 1:3) <= w\n # We can only sell what is in storage.\n [c = 1:3], sell[c] <= bales[c].in\n # Maximum sales quantity.\n sum(sell) <= L\n end\n )\n end\n # ===================== Stage objective =====================\n if stage == 1\n @stageobjective(subproblem, 0.0)\n else\n @stageobjective(\n subproblem,\n 1000 * (sum(pen_p) + sum(pen_n)) +\n # cost of growing\n C[stage-1, weather] * acres.in +\n sum(\n # inventory cost\n V[stage-1] * bales[cutting].in * t[stage-1] +\n # purchase cost\n r[cutting][stage-1, weather] * buy[cutting] +\n # feed cost\n S[cutting, stage-1] * eat[cutting] -\n # sell reward\n q[cutting][stage-1, weather] * sell[cutting] for\n cutting in 1:3\n )\n )\n end\n return\n end\n SDDP.train(model)\n @test SDDP.termination_status(model) == :simulation_stopping\n @test SDDP.calculate_bound(model) ≈ 4074.1391 atol = 1e-5\nend\n\ntest_mccardle_farm_model()","category":"page"},{"location":"examples/vehicle_location/","page":"Vehicle location","title":"Vehicle location","text":"EditURL = \"vehicle_location.jl\"","category":"page"},{"location":"examples/vehicle_location/#Vehicle-location","page":"Vehicle location","title":"Vehicle location","text":"","category":"section"},{"location":"examples/vehicle_location/","page":"Vehicle location","title":"Vehicle location","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/vehicle_location/","page":"Vehicle location","title":"Vehicle location","text":"This problem is a version of the Ambulance dispatch problem. A hospital is located at 0 on the number line that stretches from 0 to 100. Ambulance bases are located at points 20, 40, 60, 80, and 100. When not responding to a call, Ambulances must be located at a base, or the hospital. In this example there are three ambulances.","category":"page"},{"location":"examples/vehicle_location/","page":"Vehicle location","title":"Vehicle location","text":"Example location:","category":"page"},{"location":"examples/vehicle_location/","page":"Vehicle location","title":"Vehicle location","text":"H B B B B B\n0 ---- 20 ---- 40 ---- 60 ---- 80 ---- 100","category":"page"},{"location":"examples/vehicle_location/","page":"Vehicle location","title":"Vehicle location","text":"Each stage, a call comes in from somewhere on the number line. The agent must decide which ambulance to dispatch. 
They pay the cost of twice the driving distance. If an ambulance is not dispatched in a stage, the ambulance can be relocated to a different base in preparation for future calls. This incurs a cost of the driving distance.","category":"page"},{"location":"examples/vehicle_location/","page":"Vehicle location","title":"Vehicle location","text":"using SDDP\nimport HiGHS\nimport Test\n\nfunction vehicle_location_model(duality_handler)\n hospital_location = 0\n bases = vcat(hospital_location, [20, 40, 60, 80, 100])\n vehicles = [1, 2, 3]\n requests = 0:10:100\n shift_cost(src, dest) = abs(src - dest)\n function dispatch_cost(base, request)\n return 2 * (abs(request - hospital_location) + abs(request - base))\n end\n # Initial state of emergency vehicles at bases. All ambulances start at the\n # hospital.\n initial_state(b, v) = b == hospital_location ? 1.0 : 0.0\n model = SDDP.LinearPolicyGraph(;\n stages = 10,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, t\n # Current location of each vehicle at each base.\n @variable(\n sp,\n 0 <= location[b = bases, v = vehicles] <= 1,\n SDDP.State,\n initial_value = initial_state(b, v)\n )\n @variables(sp, begin\n # Which vehicle is dispatched?\n 0 <= dispatch[bases, vehicles] <= 1, Bin\n # Shifting vehicles between bases: [src, dest, vehicle]\n 0 <= shift[bases, bases, vehicles] <= 1, Bin\n end)\n # Flow of vehicles in and out of bases:\n @expression(\n sp,\n base_balance[b in bases, v in vehicles],\n location[b, v].in - dispatch[b, v] - sum(shift[b, :, v]) +\n sum(shift[:, b, v])\n )\n @constraints(\n sp,\n begin\n # Only one vehicle dispatched to call.\n sum(dispatch) == 1\n # Can only dispatch vehicle from base if vehicle is at that base.\n [b in bases, v in vehicles],\n dispatch[b, v] <= location[b, v].in\n # Can only shift vehicle if vehicle is at that src base.\n [b in bases, v in vehicles],\n sum(shift[b, :, v]) <= location[b, v].in\n # Can only shift vehicle if vehicle is not being dispatched.\n [b in bases, v in vehicles],\n sum(shift[b, :, v]) + dispatch[b, v] <= 1\n # Can't shift to same base.\n [b in bases, v in vehicles], shift[b, b, v] == 0\n # Update states for non-home/non-hospital bases.\n [b in bases[2:end], v in vehicles],\n location[b, v].out == base_balance[b, v]\n # Update states for home/hospital bases.\n [v in vehicles],\n location[hospital_location, v].out ==\n base_balance[hospital_location, v] + sum(dispatch[:, v])\n end\n )\n SDDP.parameterize(sp, requests) do request\n @stageobjective(\n sp,\n sum(\n # Distance to travel from base to emergency and then to hospital.\n dispatch[b, v] * dispatch_cost(b, request) +\n # Distance travelled by vehicles relocating bases.\n sum(\n shift_cost(b, dest) * shift[b, dest, v] for\n dest in bases\n ) for b in bases, v in vehicles\n )\n )\n end\n end\n if get(ARGS, 1, \"\") == \"--write\"\n # Run `$ julia vehicle_location.jl --write` to update the benchmark\n # model directory\n model_dir = joinpath(@__DIR__, \"..\", \"..\", \"..\", \"benchmarks\", \"models\")\n SDDP.write_to_file(\n model,\n joinpath(model_dir, \"vehicle_location.sof.json.gz\");\n test_scenarios = 100,\n )\n exit(0)\n end\n SDDP.train(\n model;\n iteration_limit = 20,\n log_frequency = 10,\n cut_deletion_minimum = 100,\n duality_handler = duality_handler,\n )\n Test.@test SDDP.calculate_bound(model) >= 1000\n return\nend\n\n# TODO(odow): find out why this fails\n# 
vehicle_location_model(SDDP.ContinuousConicDuality())","category":"page"},{"location":"guides/improve_computational_performance/#Improve-computational-performance","page":"Improve computational performance","title":"Improve computational performance","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"SDDP is a computationally intensive algorithm. Here are some suggestions for how the computational performance can be improved.","category":"page"},{"location":"guides/improve_computational_performance/#Numerical-stability-(again)","page":"Improve computational performance","title":"Numerical stability (again)","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"We've already discussed this in the Numerical stability section of Words of warning. But it's so important that we're going to say it again: improving the problem scaling is one of the best ways to improve the numerical performance of your models.","category":"page"},{"location":"guides/improve_computational_performance/#Solver-selection","page":"Improve computational performance","title":"Solver selection","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"The majority of the solution time is spent inside the low-level solvers. Choosing the right solver (and the associated settings) can lead to big speed-ups.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"Choose a commercial solver.\nOptions include CPLEX, Gurobi, and Xpress. Using free solvers such as CLP and HiGHS isn't a viable approach for large problems.\nTry different solvers.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"Even commercial solvers can have wildly different solution times. We've seen models on which CPLEX was 50% faster than Gurobi, and vice versa.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"Experiment with different solver options.\nUsing the default settings is usually a good option. However, sometimes it can pay to change these. In particular, forcing solvers to use the dual simplex algorithm (e.g., Method=1 in Gurobi) is usually a performance win.","category":"page"},
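A minimal sketch of setting such a solver option follows. This snippet is not part of the original guide: it assumes Gurobi and Gurobi.jl are installed, and "Method" => 1 is the Gurobi-specific parameter value that selects dual simplex. One way to attach the setting is to wrap the optimizer with JuMP.optimizer_with_attributes when building the policy graph:

using SDDP, Gurobi
model = SDDP.LinearPolicyGraph(;
    stages = 3,
    lower_bound = 0.0,
    # Hypothetical example: ask Gurobi to use dual simplex in every subproblem solve.
    optimizer = JuMP.optimizer_with_attributes(Gurobi.Optimizer, "Method" => 1),
) do sp, t
    # ... build model ...
end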
{"location":"guides/improve_computational_performance/#Single-cut-vs.-multi-cut","page":"Improve computational performance","title":"Single-cut vs. multi-cut","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"There are two competing ways that cuts can be created in SDDP: single-cut and multi-cut. By default, SDDP.jl uses the single-cut version of SDDP.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"The performance of each method is problem-dependent. We recommend that you try both in order to see which one performs better. In general, the single-cut method works better when the number of realizations of the stagewise-independent random variable is large, whereas the multi-cut method works better on small problems. However, the multi-cut method can cause numerical stability problems, particularly if used in conjunction with objective or belief state variables.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"You can switch between the methods by passing the relevant flag to cut_type in SDDP.train.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"SDDP.train(model; cut_type = SDDP.SINGLE_CUT)\nSDDP.train(model; cut_type = SDDP.MULTI_CUT)","category":"page"},{"location":"guides/improve_computational_performance/#Parallelism","page":"Improve computational performance","title":"Parallelism","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"SDDP.jl can take advantage of the parallel nature of modern computers to solve problems across multiple cores.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"info: Info\nWe highly recommend that you read the Julia manual's section on parallel computing.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"You can start Julia from a command line with N worker processes using the -p flag:","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"julia -p N","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"Alternatively, you can use the Distributed.jl package:","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"using Distributed\nDistributed.addprocs(N)","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"warning: Warning\nWorkers DON'T inherit their parent's Pkg environment. Therefore, if you started Julia with --project=/path/to/environment (or if you activated an environment from the REPL), you will need to put the following at the top of your script:using Distributed\n@everywhere begin\n import Pkg\n Pkg.activate(\"/path/to/environment\")\nend","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"Currently SDDP.jl supports two parallel schemes, SDDP.Serial and SDDP.Asynchronous. 
Instances of these parallel schemes should be passed to the parallel_scheme argument of SDDP.train and SDDP.simulate.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"using SDDP, HiGHS\nmodel = SDDP.LinearPolicyGraph(\n stages = 2, lower_bound = 0, optimizer = HiGHS.Optimizer\n) do sp, t\n @variable(sp, x >= 0, SDDP.State, initial_value = 1)\n @stageobjective(sp, x.out)\nend\nSDDP.train(model; iteration_limit = 10, parallel_scheme = SDDP.Asynchronous())\nSDDP.simulate(model, 10; parallel_scheme = SDDP.Asynchronous())","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"There is a large overhead for using the asynchronous solver. Even if you choose asynchronous mode, SDDP.jl will start in serial mode while the initialization takes place. Therefore, in the log you will see that the initial iterations take place on the master thread (Proc. ID = 1), and it is only after a while that the solve switches to full parallelism.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"info: Info\nBecause of the large data communication requirements (all cuts have to be shared with all other cores), the solution time will not scale linearly with the number of cores.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"info: Info\nGiven the same number of iterations, the policy obtained from asynchronous mode will be worse than the policy obtained from serial mode. However, the asynchronous solver can take significantly less time to compute the same number of iterations.","category":"page"},
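Putting the pieces above together, here is a minimal end-to-end sketch. It is not part of the original guide; the worker count 4 and the environment path are placeholders, and it simply recombines the snippets shown earlier in this section:

using Distributed
Distributed.addprocs(4)  # add 4 worker processes (placeholder count)
@everywhere begin
    import Pkg
    Pkg.activate("/path/to/environment")  # activate the same environment on every worker
end
@everywhere using SDDP, HiGHS  # load the packages on the workers as well
model = SDDP.LinearPolicyGraph(
    stages = 2, lower_bound = 0, optimizer = HiGHS.Optimizer
) do sp, t
    @variable(sp, x >= 0, SDDP.State, initial_value = 1)
    @stageobjective(sp, x.out)
end
SDDP.train(model; iteration_limit = 10, parallel_scheme = SDDP.Asynchronous())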
{"location":"guides/improve_computational_performance/#Data-movement","page":"Improve computational performance","title":"Data movement","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"By default, data defined on the master process is not made available to the workers. Therefore, a model like the following:","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"data = 1\nmodel = SDDP.LinearPolicyGraph(stages = 2, lower_bound = 0) do sp, t\n @variable(sp, x >= 0, SDDP.State, initial_value = data)\n @stageobjective(sp, x.out)\nend","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"will result in an error like UndefVarError: data not defined.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"There are three solutions to this problem.","category":"page"},{"location":"guides/improve_computational_performance/#Option-1:-declare-data-inside-the-build-function","page":"Improve computational performance","title":"Option 1: declare data inside the build function","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"model = SDDP.LinearPolicyGraph(stages = 2, lower_bound = 0) do sp, t\n data = 1\n @variable(sp, x >= 0, SDDP.State, initial_value = data)\n @stageobjective(sp, x.out)\nend","category":"page"},{"location":"guides/improve_computational_performance/#Option-2:-use-@everywhere","page":"Improve computational performance","title":"Option 2: use @everywhere","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"@everywhere begin\n data = 1\nend\nmodel = SDDP.LinearPolicyGraph(stages = 2, lower_bound = 0) do sp, t\n @variable(sp, x >= 0, SDDP.State, initial_value = data)\n @stageobjective(sp, x.out)\nend","category":"page"},{"location":"guides/improve_computational_performance/#Option-3:-build-the-model-in-a-function","page":"Improve computational performance","title":"Option 3: build the model in a function","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"function build_model()\n data = 1\n return SDDP.LinearPolicyGraph(stages = 2, lower_bound = 0) do sp, t\n @variable(sp, x >= 0, SDDP.State, initial_value = data)\n @stageobjective(sp, x.out)\n end\nend\n\nmodel = build_model()","category":"page"},{"location":"guides/improve_computational_performance/#Initialization-hooks","page":"Improve computational performance","title":"Initialization hooks","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"warning: Warning\nThis is important if you use Gurobi!","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"SDDP.Asynchronous accepts a pre-processing hook that is run on each worker process before the model is solved. The most useful situation is for solvers that need an initialization step. A good example is Gurobi, which can share an environment amongst all models on a worker. 
Notably, this environment cannot be shared amongst workers, so defining one environment at the top of a script will fail!","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"To initialize a new environment on each worker, use the following:","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"SDDP.train(\n model;\n parallel_scheme = SDDP.Asynchronous() do m::SDDP.PolicyGraph\n env = Gurobi.Env()\n set_optimizer(m, () -> Gurobi.Optimizer(env))\n end,\n)","category":"page"},{"location":"examples/FAST_quickstart/","page":"FAST: the quickstart problem","title":"FAST: the quickstart problem","text":"EditURL = \"FAST_quickstart.jl\"","category":"page"},{"location":"examples/FAST_quickstart/#FAST:-the-quickstart-problem","page":"FAST: the quickstart problem","title":"FAST: the quickstart problem","text":"","category":"section"},{"location":"examples/FAST_quickstart/","page":"FAST: the quickstart problem","title":"FAST: the quickstart problem","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/FAST_quickstart/","page":"FAST: the quickstart problem","title":"FAST: the quickstart problem","text":"An implementation of the QuickStart example from FAST","category":"page"},{"location":"examples/FAST_quickstart/","page":"FAST: the quickstart problem","title":"FAST: the quickstart problem","text":"using SDDP, HiGHS, Test\n\nfunction fast_quickstart()\n model = SDDP.PolicyGraph(\n SDDP.LinearGraph(2);\n lower_bound = -5,\n optimizer = HiGHS.Optimizer,\n ) do sp, t\n @variable(sp, x >= 0, SDDP.State, initial_value = 0.0)\n if t == 1\n @stageobjective(sp, x.out)\n else\n @variable(sp, s >= 0)\n @constraint(sp, s <= x.in)\n SDDP.parameterize(sp, [2, 3]) do ω\n return JuMP.set_upper_bound(s, ω)\n end\n @stageobjective(sp, -2s)\n end\n end\n\n det = SDDP.deterministic_equivalent(model, HiGHS.Optimizer)\n set_silent(det)\n JuMP.optimize!(det)\n @test JuMP.objective_value(det) == -2\n\n SDDP.train(model; log_every_iteration = true)\n @test SDDP.calculate_bound(model) == -2\nend\n\nfast_quickstart()","category":"page"},{"location":"examples/StructDualDynProg.jl_prob5.2_3stages/","page":"StructDualDynProg: Problem 5.2, 3 stages","title":"StructDualDynProg: Problem 5.2, 3 stages","text":"EditURL = \"StructDualDynProg.jl_prob5.2_3stages.jl\"","category":"page"},{"location":"examples/StructDualDynProg.jl_prob5.2_3stages/#StructDualDynProg:-Problem-5.2,-3-stages","page":"StructDualDynProg: Problem 5.2, 3 stages","title":"StructDualDynProg: Problem 5.2, 3 stages","text":"","category":"section"},{"location":"examples/StructDualDynProg.jl_prob5.2_3stages/","page":"StructDualDynProg: Problem 5.2, 3 stages","title":"StructDualDynProg: Problem 5.2, 3 stages","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/StructDualDynProg.jl_prob5.2_3stages/","page":"StructDualDynProg: Problem 5.2, 3 stages","title":"StructDualDynProg: Problem 5.2, 3 stages","text":"This example comes from StochasticDualDynamicProgramming.jl.","category":"page"},{"location":"examples/StructDualDynProg.jl_prob5.2_3stages/","page":"StructDualDynProg: Problem 5.2, 3 stages","title":"StructDualDynProg: Problem 5.2, 3 stages","text":"using SDDP, HiGHS, Test\n\nfunction test_prob52_3stages()\n model = SDDP.LinearPolicyGraph(;\n stages = 3,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, t\n n = 4\n m = 3\n i_c = [16, 5, 32, 2]\n C = [25, 80, 6.5, 160]\n T = [8760, 7000, 1500] / 8760\n D2 = [diff([0, 3919, 7329, 10315]) diff([0, 7086, 9004, 11169])]\n p2 = [0.9, 0.1]\n @variable(sp, x[i = 1:n] >= 0, SDDP.State, initial_value = 0.0)\n @variables(sp, begin\n y[1:n, 1:m] >= 0\n v[1:n] >= 0\n penalty >= 0\n ξ[j = 1:m]\n end)\n @constraints(sp, begin\n [i = 1:n], x[i].out == x[i].in + v[i]\n [i = 1:n], sum(y[i, :]) <= x[i].in\n [j = 1:m], sum(y[:, j]) + penalty >= ξ[j]\n end)\n @stageobjective(sp, i_c'v + C' * y * T + 1e5 * penalty)\n if t != 1 # no uncertainty in first stage\n SDDP.parameterize(sp, 1:size(D2, 2), p2) do ω\n for j in 1:m\n JuMP.fix(ξ[j], D2[j, ω])\n end\n end\n end\n if t == 3\n @constraint(sp, sum(v) == 0)\n end\n end\n\n det = SDDP.deterministic_equivalent(model, HiGHS.Optimizer)\n set_silent(det)\n JuMP.optimize!(det)\n @test JuMP.objective_value(det) ≈ 406712.49 atol = 0.1\n\n SDDP.train(model; log_frequency = 10)\n @test SDDP.calculate_bound(model) ≈ 406712.49 atol = 0.1\n return\nend\n\ntest_prob52_3stages()","category":"page"},{"location":"examples/infinite_horizon_hydro_thermal/","page":"Infinite horizon hydro-thermal","title":"Infinite horizon hydro-thermal","text":"EditURL = \"infinite_horizon_hydro_thermal.jl\"","category":"page"},{"location":"examples/infinite_horizon_hydro_thermal/#Infinite-horizon-hydro-thermal","page":"Infinite horizon hydro-thermal","title":"Infinite horizon hydro-thermal","text":"","category":"section"},{"location":"examples/infinite_horizon_hydro_thermal/","page":"Infinite horizon hydro-thermal","title":"Infinite horizon hydro-thermal","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/infinite_horizon_hydro_thermal/","page":"Infinite horizon hydro-thermal","title":"Infinite horizon hydro-thermal","text":"using SDDP, HiGHS, Test, Statistics\n\nfunction infinite_hydro_thermal(; cut_type)\n Ω = [\n (inflow = 0.0, demand = 7.5),\n (inflow = 5.0, demand = 5),\n (inflow = 10.0, demand = 2.5),\n ]\n graph = SDDP.Graph(\n :root_node,\n [:week],\n [(:root_node => :week, 1.0), (:week => :week, 0.9)],\n )\n model = SDDP.PolicyGraph(\n graph;\n lower_bound = 0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, node\n @variable(\n subproblem,\n 5.0 <= reservoir <= 15.0,\n SDDP.State,\n initial_value = 10.0\n )\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n spill >= 0\n inflow\n demand\n end)\n @constraints(\n subproblem,\n begin\n reservoir.out == reservoir.in - hydro_generation - spill + inflow\n hydro_generation + thermal_generation == demand\n end\n )\n @stageobjective(subproblem, 10 * spill + thermal_generation)\n SDDP.parameterize(subproblem, Ω) do ω\n JuMP.fix(inflow, ω.inflow)\n return JuMP.fix(demand, ω.demand)\n end\n end\n SDDP.train(\n model;\n cut_type = cut_type,\n log_frequency = 100,\n sampling_scheme = SDDP.InSampleMonteCarlo(; terminate_on_cycle = true),\n parallel_scheme = SDDP.Serial(),\n cycle_discretization_delta = 0.1,\n )\n @test SDDP.calculate_bound(model) ≈ 119.167 atol = 0.1\n\n results = SDDP.simulate(model, 500)\n objectives =\n [sum(s[:stage_objective] for s in simulation) for simulation in results]\n sample_mean = round(Statistics.mean(objectives); digits = 2)\n sample_ci = round(1.96 * Statistics.std(objectives) / sqrt(500); digits = 2)\n println(\"Confidence_interval = $(sample_mean) ± $(sample_ci)\")\n @test sample_mean - sample_ci <= 119.167 <= sample_mean + sample_ci\n return\nend\n\ninfinite_hydro_thermal(; cut_type = SDDP.SINGLE_CUT)\ninfinite_hydro_thermal(; cut_type = SDDP.MULTI_CUT)","category":"page"},{"location":"apireference/#api_reference_list","page":"API Reference","title":"API Reference","text":"","category":"section"},{"location":"apireference/#Policy-graphs","page":"API Reference","title":"Policy graphs","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.Graph\nSDDP.add_node\nSDDP.add_edge\nSDDP.add_ambiguity_set\nSDDP.LinearGraph\nSDDP.MarkovianGraph\nSDDP.UnicyclicGraph\nSDDP.LinearPolicyGraph\nSDDP.MarkovianPolicyGraph\nSDDP.PolicyGraph","category":"page"},{"location":"apireference/#SDDP.Graph","page":"API Reference","title":"SDDP.Graph","text":"Graph(root_node::T) where T\n\nCreate an empty graph struture with the root node root_node.\n\nExample\n\njulia> graph = SDDP.Graph(0)\nRoot\n 0\nNodes\n {}\nArcs\n {}\n\njulia> graph = SDDP.Graph(:root)\nRoot\n root\nNodes\n {}\nArcs\n {}\n\njulia> graph = SDDP.Graph((0, 0))\nRoot\n (0, 0)\nNodes\n {}\nArcs\n {}\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.add_node","page":"API Reference","title":"SDDP.add_node","text":"add_node(graph::Graph{T}, node::T) where {T}\n\nAdd a node to the graph graph.\n\nExamples\n\njulia> graph = SDDP.Graph(:root);\n\njulia> SDDP.add_node(graph, :A)\n\njulia> graph\nRoot\n root\nNodes\n A\nArcs\n {}\n\njulia> graph = SDDP.Graph(0);\n\njulia> SDDP.add_node(graph, 2)\n\njulia> graph\nRoot\n 0\nNodes\n 2\nArcs\n {}\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.add_edge","page":"API 
Reference","title":"SDDP.add_edge","text":"add_edge(graph::Graph{T}, edge::Pair{T, T}, probability::Float64) where {T}\n\nAdd an edge to the graph graph.\n\nExamples\n\njulia> graph = SDDP.Graph(0);\n\njulia> SDDP.add_node(graph, 1)\n\njulia> SDDP.add_edge(graph, 0 => 1, 0.9)\n\njulia> graph\nRoot\n 0\nNodes\n 1\nArcs\n 0 => 1 w.p. 0.9\n\njulia> graph = SDDP.Graph(:root);\n\njulia> SDDP.add_node(graph, :A)\n\njulia> SDDP.add_edge(graph, :root => :A, 1.0)\n\njulia> graph\nRoot\n root\nNodes\n A\nArcs\n root => A w.p. 1.0\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.add_ambiguity_set","page":"API Reference","title":"SDDP.add_ambiguity_set","text":"add_ambiguity_set(\n graph::Graph{T},\n set::Vector{T},\n lipschitz::Vector{Float64},\n) where {T}\n\nAdd set to the belief partition of graph.\n\nlipschitz is a vector of Lipschitz constants, with one element for each node in set. The Lipschitz constant is the maximum slope of the cost-to-go function with respect to the belief state associated with each node at any point in the state-space.\n\nExamples\n\njulia> graph = SDDP.LinearGraph(3)\nRoot\n 0\nNodes\n 1\n 2\n 3\nArcs\n 0 => 1 w.p. 1.0\n 1 => 2 w.p. 1.0\n 2 => 3 w.p. 1.0\n\njulia> SDDP.add_ambiguity_set(graph, [1, 2], [1e3, 1e2])\n\njulia> SDDP.add_ambiguity_set(graph, [3], [1e5])\n\njulia> graph\nRoot\n 0\nNodes\n 1\n 2\n 3\nArcs\n 0 => 1 w.p. 1.0\n 1 => 2 w.p. 1.0\n 2 => 3 w.p. 1.0\nPartitions\n {1, 2}\n {3}\n\n\n\n\n\nadd_ambiguity_set(graph::Graph{T}, set::Vector{T}, lipschitz::Float64)\n\nAdd set to the belief partition of graph.\n\nlipschitz is a Lipschitz constant for each node in set. The Lipschitz constant is the maximum slope of the cost-to-go function with respect to the belief state associated with each node at any point in the state-space.\n\nExamples\n\njulia> graph = SDDP.LinearGraph(3);\n\njulia> SDDP.add_ambiguity_set(graph, [1, 2], 1e3)\n\njulia> SDDP.add_ambiguity_set(graph, [3], 1e5)\n\njulia> graph\nRoot\n 0\nNodes\n 1\n 2\n 3\nArcs\n 0 => 1 w.p. 1.0\n 1 => 2 w.p. 1.0\n 2 => 3 w.p. 1.0\nPartitions\n {1, 2}\n {3}\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.LinearGraph","page":"API Reference","title":"SDDP.LinearGraph","text":"LinearGraph(stages::Int)\n\nCreate a linear graph with stages number of nodes.\n\nExamples\n\njulia> graph = SDDP.LinearGraph(3)\nRoot\n 0\nNodes\n 1\n 2\n 3\nArcs\n 0 => 1 w.p. 1.0\n 1 => 2 w.p. 1.0\n 2 => 3 w.p. 1.0\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.MarkovianGraph","page":"API Reference","title":"SDDP.MarkovianGraph","text":"MarkovianGraph(transition_matrices::Vector{Matrix{Float64}})\n\nConstruct a Markovian graph from the vector of transition matrices.\n\ntransition_matrices[t][i, j] gives the probability of transitioning from Markov state i in stage t - 1 to Markov state j in stage t.\n\nThe dimension of the first transition matrix should be (1, N), and transition_matrics[1][1, i] is the probability of transitioning from the root node to the Markov state i.\n\nExamples\n\njulia> graph = SDDP.MarkovianGraph([ones(1, 1), [0.5 0.5], [0.8 0.2; 0.2 0.8]])\nRoot\n (0, 1)\nNodes\n (1, 1)\n (2, 1)\n (2, 2)\n (3, 1)\n (3, 2)\nArcs\n (0, 1) => (1, 1) w.p. 1.0\n (1, 1) => (2, 1) w.p. 0.5\n (1, 1) => (2, 2) w.p. 0.5\n (2, 1) => (3, 1) w.p. 0.8\n (2, 1) => (3, 2) w.p. 0.2\n (2, 2) => (3, 1) w.p. 0.2\n (2, 2) => (3, 2) w.p. 
0.8\n\n\n\n\n\nMarkovianGraph(;\n stages::Int,\n transition_matrix::Matrix{Float64},\n root_node_transition::Vector{Float64},\n)\n\nConstruct a Markovian graph object with stages number of stages and time-independent Markov transition probabilities.\n\ntransition_matrix must be a square matrix, and the probability of transitioning from Markov state i in stage t to Markov state j in stage t + 1 is given by transition_matrix[i, j].\n\nroot_node_transition[i] is the probability of transitioning from the root node to Markov state i in the first stage.\n\nExamples\n\njulia> graph = SDDP.MarkovianGraph(;\n stages = 3,\n transition_matrix = [0.8 0.2; 0.2 0.8],\n root_node_transition = [0.5, 0.5],\n )\nRoot\n (0, 1)\nNodes\n (1, 1)\n (1, 2)\n (2, 1)\n (2, 2)\n (3, 1)\n (3, 2)\nArcs\n (0, 1) => (1, 1) w.p. 0.5\n (0, 1) => (1, 2) w.p. 0.5\n (1, 1) => (2, 1) w.p. 0.8\n (1, 1) => (2, 2) w.p. 0.2\n (1, 2) => (2, 1) w.p. 0.2\n (1, 2) => (2, 2) w.p. 0.8\n (2, 1) => (3, 1) w.p. 0.8\n (2, 1) => (3, 2) w.p. 0.2\n (2, 2) => (3, 1) w.p. 0.2\n (2, 2) => (3, 2) w.p. 0.8\n\n\n\n\n\nMarkovianGraph(\n simulator::Function;\n budget::Union{Int,Vector{Int}},\n scenarios::Int = 1000,\n)\n\nConstruct a Markovian graph by fitting Markov chain to scenarios generated by simulator().\n\nbudget is the total number of nodes in the resulting Markov chain. This can either be specified as a single Int, in which case we will attempt to intelligently distributed the nodes between stages. Alternatively, budget can be a Vector{Int}, which details the number of Markov state to have in each stage.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.UnicyclicGraph","page":"API Reference","title":"SDDP.UnicyclicGraph","text":"UnicyclicGraph(discount_factor::Float64; num_nodes::Int = 1)\n\nConstruct a graph composed of num_nodes nodes that form a single cycle, with a probability of discount_factor of continuing the cycle.\n\nExamples\n\njulia> graph = SDDP.UnicyclicGraph(0.9; num_nodes = 2)\nRoot\n 0\nNodes\n 1\n 2\nArcs\n 0 => 1 w.p. 1.0\n 1 => 2 w.p. 1.0\n 2 => 1 w.p. 0.9\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.LinearPolicyGraph","page":"API Reference","title":"SDDP.LinearPolicyGraph","text":"LinearPolicyGraph(builder::Function; stages::Int, kwargs...)\n\nCreate a linear policy graph with stages number of stages.\n\nKeyword arguments\n\nstages: the number of stages in the graph\nkwargs: other keyword arguments are passed to SDDP.PolicyGraph.\n\nExamples\n\njulia> SDDP.LinearPolicyGraph(; stages = 2, lower_bound = 0.0) do sp, t\n # ... build model ...\nend\nA policy graph with 2 nodes.\nNode indices: 1, 2\n\nis equivalent to\n\njulia> graph = SDDP.LinearGraph(2);\n\njulia> SDDP.PolicyGraph(graph; lower_bound = 0.0) do sp, t\n # ... build model ...\nend\nA policy graph with 2 nodes.\nNode indices: 1, 2\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.MarkovianPolicyGraph","page":"API Reference","title":"SDDP.MarkovianPolicyGraph","text":"MarkovianPolicyGraph(\n builder::Function;\n transition_matrices::Vector{Array{Float64,2}},\n kwargs...\n)\n\nCreate a Markovian policy graph based on the transition matrices given in transition_matrices.\n\nKeyword arguments\n\ntransition_matrices[t][i, j] gives the probability of transitioning from Markov state i in stage t - 1 to Markov state j in stage t. 
The dimension of the first transition matrix should be (1, N), and transition_matrices[1][1, i] is the probability of transitioning from the root node to the Markov state i.\nkwargs: other keyword arguments are passed to SDDP.PolicyGraph.\n\nSee also\n\nSee SDDP.MarkovianGraph for other ways of specifying a Markovian policy graph.\n\nSee SDDP.PolicyGraph for the other keyword arguments.\n\nExamples\n\njulia> SDDP.MarkovianPolicyGraph(;\n transition_matrices = [ones(1, 1), [0.5 0.5], [0.8 0.2; 0.2 0.8]],\n lower_bound = 0.0,\n ) do sp, node\n # ... build model ...\n end\nA policy graph with 5 nodes.\n Node indices: (1, 1), (2, 1), (2, 2), (3, 1), (3, 2)\n\nis equivalent to\n\njulia> graph = SDDP.MarkovianGraph([ones(1, 1), [0.5 0.5], [0.8 0.2; 0.2 0.8]]);\n\njulia> SDDP.PolicyGraph(graph; lower_bound = 0.0) do sp, t\n # ... build model ...\nend\nA policy graph with 5 nodes.\n Node indices: (1, 1), (2, 1), (2, 2), (3, 1), (3, 2)\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.PolicyGraph","page":"API Reference","title":"SDDP.PolicyGraph","text":"PolicyGraph(\n builder::Function,\n graph::Graph{T};\n sense::Symbol = :Min,\n lower_bound = -Inf,\n upper_bound = Inf,\n optimizer = nothing,\n) where {T}\n\nConstruct a policy graph based on the graph structure of graph. (See SDDP.Graph for details.)\n\nKeyword arguments\n\nsense: whether we are minimizing (:Min) or maximizing (:Max).\nlower_bound: if minimizing, a valid lower bound for the cost to go in all subproblems.\nupper_bound: if maximizing, a valid upper bound for the value to go in all subproblems.\noptimizer: the optimizer to use for each of the subproblems.\n\nExamples\n\nfunction builder(subproblem::JuMP.Model, index)\n # ... subproblem definition ...\nend\n\nmodel = PolicyGraph(\n builder,\n graph;\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n)\n\nOr, using the Julia do ... end syntax:\n\nmodel = PolicyGraph(\n graph;\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, index\n # ... subproblem definitions ...\nend\n\n\n\n\n\n","category":"type"},{"location":"apireference/#Subproblem-definition","page":"API Reference","title":"Subproblem definition","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"@stageobjective\nSDDP.parameterize\nSDDP.add_objective_state\nSDDP.objective_state\nSDDP.Noise","category":"page"},{"location":"apireference/#SDDP.@stageobjective","page":"API Reference","title":"SDDP.@stageobjective","text":"@stageobjective(subproblem, expr)\n\nSet the stage-objective of subproblem to expr.\n\nExamples\n\n@stageobjective(subproblem, 2x + y)\n\n\n\n\n\n","category":"macro"},{"location":"apireference/#SDDP.parameterize","page":"API Reference","title":"SDDP.parameterize","text":"parameterize(\n modify::Function,\n subproblem::JuMP.Model,\n realizations::Vector{T},\n probability::Vector{Float64} = fill(1.0 / length(realizations))\n) where {T}\n\nAdd a parameterization function modify to subproblem.
The modify function takes one argument and modifies subproblem based on the realization of the noise sampled from realizations with corresponding probabilities probability.\n\nIn order to conduct an out-of-sample simulation, modify should accept arguments that are not in realizations (but still of type T).\n\nExamples\n\nSDDP.parameterize(subproblem, [1, 2, 3], [0.4, 0.3, 0.3]) do ω\n JuMP.set_upper_bound(x, ω)\nend\n\n\n\n\n\nparameterize(node::Node, noise)\n\nParameterize node node with the noise noise.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.add_objective_state","page":"API Reference","title":"SDDP.add_objective_state","text":"add_objective_state(update::Function, subproblem::JuMP.Model; kwargs...)\n\nAdd an objective state variable to subproblem.\n\nRequired kwargs are:\n\ninitial_value: The initial value of the objective state variable at the root node.\nlipschitz: The lipschitz constant of the objective state variable.\n\nSetting a tight value for the lipschitz constant can significantly improve the speed of convergence.\n\nOptional kwargs are:\n\nlower_bound: A valid lower bound for the objective state variable. Can be -Inf.\nupper_bound: A valid upper bound for the objective state variable. Can be +Inf.\n\nSetting tight values for these optional variables can significantly improve the speed of convergence.\n\nIf the objective state is N-dimensional, each keyword argument must be an NTuple{N,Float64}. For example, initial_value = (0.0, 1.0).\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.objective_state","page":"API Reference","title":"SDDP.objective_state","text":"objective_state(subproblem::JuMP.Model)\n\nReturn the current objective state of the problem.\n\nCan only be called from SDDP.parameterize.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.Noise","page":"API Reference","title":"SDDP.Noise","text":"Noise(support, probability)\n\nAn atom of a discrete random variable at the point of support support and associated probability probability.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#Training-the-policy","page":"API Reference","title":"Training the policy","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.numerical_stability_report\nSDDP.train\nSDDP.termination_status\nSDDP.write_cuts_to_file\nSDDP.read_cuts_from_file\nSDDP.write_log_to_csv\nSDDP.set_numerical_difficulty_callback","category":"page"},{"location":"apireference/#SDDP.numerical_stability_report","page":"API Reference","title":"SDDP.numerical_stability_report","text":"numerical_stability_report(\n [io::IO = stdout,]\n model::PolicyGraph;\n by_node::Bool = false,\n print::Bool = true,\n warn::Bool = true,\n)\n\nPrint a report identifying possible numeric stability issues.\n\nKeyword arguments\n\nIf by_node, print a report for each node in the graph.\nIf print, print to io.\nIf warn, warn if the coefficients may cause numerical issues.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.train","page":"API Reference","title":"SDDP.train","text":"SDDP.train(model::PolicyGraph; kwargs...)\n\nTrain the policy for model.\n\nKeyword arguments\n\niteration_limit::Int: number of iterations to conduct before termination.\ntime_limit::Float64: number of seconds to train before termination.\nstoping_rules: a vector of SDDP.AbstractStoppingRules. Defaults to SimulationStoppingRule.\nprint_level::Int: control the level of printing to the screen. Defaults to 1. 
Set to 0 to disable all printing.\nlog_file::String: filepath at which to write a log of the training progress. Defaults to SDDP.log.\nlog_frequency::Int: control the frequency with which the logging is outputted (iterations/log). It must be at least 1. Defaults to 1.\nlog_every_seconds::Float64: control the frequency with which the logging is outputted (seconds/log). Defaults to 0.0.\nlog_every_iteration::Bool; over-rides log_frequency and log_every_seconds to force every iteration to be printed. Defaults to false.\nrun_numerical_stability_report::Bool: generate (and print) a numerical stability report prior to solve. Defaults to true.\nrefine_at_similar_nodes::Bool: if SDDP can detect that two nodes have the same children, it can cheaply add a cut discovered at one to the other. In almost all cases this should be set to true.\ncut_deletion_minimum::Int: the minimum number of cuts to cache before deleting cuts from the subproblem. The impact on performance is solver specific; however, smaller values result in smaller subproblems (and therefore quicker solves), at the expense of more time spent performing cut selection.\nrisk_measure: the risk measure to use at each node. Defaults to Expectation.\nsampling_scheme: a sampling scheme to use on the forward pass of the algorithm. Defaults to InSampleMonteCarlo.\nbackward_sampling_scheme: a backward pass sampling scheme to use on the backward pass of the algorithm. Defaults to CompleteSampler.\ncut_type: choose between SDDP.SINGLE_CUT and SDDP.MULTI_CUT versions of SDDP.\ndashboard::Bool: open a visualization of the training over time. Defaults to false.\nparallel_scheme::AbstractParallelScheme: specify a scheme for solving in parallel. Defaults to Threaded().\nforward_pass::AbstractForwardPass: specify a scheme to use for the forward passes.\nforward_pass_resampling_probability::Union{Nothing,Float64}: set to a value in (0, 1) to enable RiskAdjustedForwardPass. Defaults to nothing (disabled).\nadd_to_existing_cuts::Bool: set to true to allow training a model that was previously trained. Defaults to false.\nduality_handler::AbstractDualityHandler: specify a duality handler to use when creating cuts.\npost_iteration_callback::Function: a callback with the signature post_iteration_callback(::IterationResult) that is evaluated after each iteration of the algorithm.\n\nThere is also a special option for infinite horizon problems\n\ncycle_discretization_delta: the maximum distance between states allowed on the forward pass. This is for advanced users only and needs to be used in conjunction with a different sampling_scheme.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.termination_status","page":"API Reference","title":"SDDP.termination_status","text":"termination_status(model::PolicyGraph)::Symbol\n\nQuery the reason why the training stopped.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.write_cuts_to_file","page":"API Reference","title":"SDDP.write_cuts_to_file","text":"write_cuts_to_file(\n model::PolicyGraph{T},\n filename::String;\n kwargs...,\n) where {T}\n\nWrite the cuts that form the policy in model to filename in JSON format.\n\nKeyword arguments\n\nnode_name_parser is a function which converts the name of each node into a string representation. It has the signature: node_name_parser(::T)::String.\nwrite_only_selected_cuts write only the selected cuts to the json file. 
Defaults to false.\n\nSee also SDDP.read_cuts_from_file.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.read_cuts_from_file","page":"API Reference","title":"SDDP.read_cuts_from_file","text":"read_cuts_from_file(\n model::PolicyGraph{T},\n filename::String;\n kwargs...,\n) where {T}\n\nRead cuts (saved using SDDP.write_cuts_to_file) from filename into model.\n\nSince T can be an arbitrary Julia type, the conversion to JSON is lossy. When reading, read_cuts_from_file only supports T=Int, T=NTuple{N, Int}, and T=Symbol. If you have manually created a policy graph with a different node type T, provide a function node_name_parser with the signature described below.\n\nKeyword arguments\n\nnode_name_parser(T, name::String)::T where {T}: a function that returns the node of type T given its string representation name. If node_name_parser returns nothing, those cuts are skipped.\ncut_selection::Bool: whether to run the cut selection algorithm when adding the cuts to the model.\n\nSee also SDDP.write_cuts_to_file.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.write_log_to_csv","page":"API Reference","title":"SDDP.write_log_to_csv","text":"write_log_to_csv(model::PolicyGraph, filename::String)\n\nWrite the log of the most recent training to a csv for post-analysis.\n\nAssumes that the model has been trained via SDDP.train.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.set_numerical_difficulty_callback","page":"API Reference","title":"SDDP.set_numerical_difficulty_callback","text":"set_numerical_difficulty_callback(\n model::PolicyGraph,\n callback::Function,\n)\n\nSet a callback function callback(::PolicyGraph, ::Node; require_dual::Bool) that is run when the optimizer terminates without finding a primal solution (and dual solution if require_dual is true).\n\nDefault callback\n\nThe default callback is a small variation of:\n\nfunction callback(::PolicyGraph, node::Node; require_dual::Bool)\n MOI.Utilities.reset_optimizer(node.subproblem)\n optimize!(node.subproblem)\n return\nend\n\nThis callback is the default because a common issue is solvers declaring the problem infeasible because of numerical issues related to the large number of cutting planes. Resetting the subproblem, and therefore starting from a fresh problem instead of warm-starting from the previous solution, is often enough to fix the problem and allow more iterations.\n\nOther callbacks\n\nIn cases where the problem is truly infeasible (not because of numerical issues), it may be helpful to write out the irreducible infeasible subsystem (IIS) for debugging.
For this use case, use a callback as follows:\n\nfunction callback(::PolicyGraph, node::Node; require_dual::Bool)\n JuMP.compute_conflict!(node.subproblem)\n status = JuMP.get_attribute(node.subproblem, MOI.ConflictStatus())\n if status == MOI.CONFLICT_FOUND\n iis_model, _ = JuMP.copy_conflict(node.subproblem)\n print(iis_model)\n end\n return\nend\nSDDP.set_numerical_difficulty_callback(model, callback)\n\n\n\n\n\n","category":"function"},{"location":"apireference/#api_stopping_rules","page":"API Reference","title":"Stopping rules","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.AbstractStoppingRule\nSDDP.stopping_rule_status\nSDDP.convergence_test\nSDDP.IterationLimit\nSDDP.TimeLimit\nSDDP.Statistical\nSDDP.BoundStalling\nSDDP.StoppingChain\nSDDP.SimulationStoppingRule\nSDDP.FirstStageStoppingRule","category":"page"},{"location":"apireference/#SDDP.AbstractStoppingRule","page":"API Reference","title":"SDDP.AbstractStoppingRule","text":"AbstractStoppingRule\n\nThe abstract type for the stopping-rule interface.\n\nYou need to define the following methods:\n\nSDDP.stopping_rule_status\nSDDP.convergence_test\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.stopping_rule_status","page":"API Reference","title":"SDDP.stopping_rule_status","text":"stopping_rule_status(::AbstractStoppingRule)::Symbol\n\nReturn a symbol describing the stopping rule.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.convergence_test","page":"API Reference","title":"SDDP.convergence_test","text":"convergence_test(\n model::PolicyGraph,\n log::Vector{Log},\n ::AbstractStoppingRule,\n)::Bool\n\nReturn a Bool indicating if the algorithm should terminate the training.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.IterationLimit","page":"API Reference","title":"SDDP.IterationLimit","text":"IterationLimit(limit::Int)\n\nTerminate the algorithm after limit number of iterations.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.TimeLimit","page":"API Reference","title":"SDDP.TimeLimit","text":"TimeLimit(limit::Float64)\n\nTerminate the algorithm after limit seconds of computation.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.Statistical","page":"API Reference","title":"SDDP.Statistical","text":"Statistical(;\n num_replications::Int,\n iteration_period::Int = 1,\n z_score::Float64 = 1.96,\n verbose::Bool = true,\n disable_warning::Bool = false,\n)\n\nPerform an in-sample Monte Carlo simulation of the policy with num_replications replications every iteration_period iterations, and terminate if the deterministic bound (lower if minimizing) falls into the confidence interval for the mean of the simulated cost.\n\nIf verbose = true, print the confidence interval.\n\nIf disable_warning = true, disable the warning telling you not to use this stopping rule (see below).\n\nWhy this stopping rule is not good\n\nThis stopping rule is one of the most common stopping rules seen in the literature. Don't follow the crowd. It is a poor choice for your model, and should rarely be used.
Instead, you should use the default stopping rule, or use a fixed limit like a time or iteration limit.\n\nTo understand why this stopping rule is a bad idea, assume we have conducted num_replications simulations and the objectives are in a vector objectives::Vector{Float64}.\n\nOur mean is μ = mean(objectives) and the half-width of the confidence interval is w = z_score * std(objectives) / sqrt(num_replications).\n\nMany papers suggest terminating the algorithm once the deterministic bound (lower if minimizing, upper if maximizing) is contained within the confidence interval. That is, if μ - w <= bound <= μ + w. Even worse, some papers define an optimization gap of (μ + w) / bound (if minimizing) or (μ - w) / bound (if maximizing), and they terminate once the gap is less than a value like 1%.\n\nBoth of these approaches are misleading, and more often than not, they will result in terminating with a sub-optimal policy that performs worse than expected. There are two main reasons for this:\n\nThe half-width depends on the number of replications. To reduce the computational cost, users are often tempted to choose a small number of replications. This increases the half-width and makes it more likely that the algorithm will stop early. But if we choose a large number of replications, then the computational cost is high, and we would have been better off to run a fixed number of iterations and use that computational time to run extra training iterations.\nThe confidence interval assumes that the simulated values are normally distributed. In infinite horizon models, this is almost never the case. The distribution is usually closer to exponential or log-normal.\n\nThere is a third, more technical reason which relates to the conditional dependence of constructing multiple confidence intervals.\n\nThe default value of z_score = 1.96 corresponds to a 95% confidence interval. You should interpret the interval as \"if we re-run this simulation 100 times, then the true mean will lie in the confidence interval 95 times out of 100.\" But if the bound is within the confidence interval, then we know the true mean cannot be better than the bound. Therfore, there is a more than 95% chance that the mean is within the interval.\n\nA separate problem arises if we simulate, find that the bound is outside the confidence interval, keep training, and then re-simulate to compute a new confidence interval. Because we will terminate when the bound enters the confidence interval, the repeated construction of a confidence interval means that the unconditional probability that we terminate with a false positive is larger than 5% (there are now more chances that the sample mean is optimistic and that the confidence interval includes the bound but not the true mean). One fix is to simulate with a sequentially increasing number of replicates, so that the unconditional probability stays at 95%, but this runs into the problem of computational cost. For more information on sequential sampling, see, for example, Güzin Bayraksan, David P. Morton, (2011) A Sequential Sampling Procedure for Stochastic Programming. 
Operations Research 59(4):898-913.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.BoundStalling","page":"API Reference","title":"SDDP.BoundStalling","text":"BoundStalling(num_previous_iterations::Int, tolerance::Float64)\n\nTerminate the algorithm once the deterministic bound (lower if minimizing, upper if maximizing) fails to improve by more than tolerance in absolute terms for more than num_previous_iterations consecutive iterations, provided it has improved relative to the bound after the first iteration.\n\nChecking for an improvement relative to the first iteration avoids early termination in a situation where the bound fails to improve for the first N iterations. This frequently happens in models with a large number of stages, where it takes time for the cuts to propagate backward enough to modify the bound of the root node.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.StoppingChain","page":"API Reference","title":"SDDP.StoppingChain","text":"StoppingChain(rules::AbstractStoppingRule...)\n\nTerminate once all of the rules are satisfied.\n\nThis stopping rule short-circuits, so subsequent rules are only tested if the previous rules pass.\n\nExamples\n\nA stopping rule that runs 100 iterations, then checks for the bound stalling:\n\nStoppingChain(IterationLimit(100), BoundStalling(5, 0.1))\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.SimulationStoppingRule","page":"API Reference","title":"SDDP.SimulationStoppingRule","text":"SimulationStoppingRule(;\n sampling_scheme::AbstractSamplingScheme = SDDP.InSampleMonteCarlo(),\n replications::Int = -1,\n period::Int = -1,\n distance_tol::Float64 = 1e-2,\n bound_tol::Float64 = 1e-4,\n)\n\nTerminate the algorithm using a mix of heuristics. Unless you know otherwise, this is typically a good default.\n\nTermination criteria\n\nFirst, we check that the deterministic bound has stabilized. That is, over the last five iterations, the deterministic bound has changed by less than an absolute or relative tolerance of bound_tol.\n\nThen, if we have not done one in the last period iterations, we perform a primal simulation of the policy using replications out-of-sample realizations from sampling_scheme. The realizations are stored and re-used in each simulation. From each simulation, we record the value of the stage objective. We terminate the training if each of the trajectories in two consecutive simulations differs by less than distance_tol.\n\nBy default, replications and period are -1, and SDDP.jl will guess good values for these.
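If you prefer to fix these values yourself, a minimal sketch (the values 100 and 50 are illustrative assumptions, not recommended settings) is:

SDDP.train(model; stopping_rules = [SDDP.SimulationStoppingRule(replications = 100, period = 50)])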
Over-ride the default behavior by setting an appropriate value.\n\nExample\n\nSDDP.train(model; stopping_rules = [SimulationStoppingRule()])\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.FirstStageStoppingRule","page":"API Reference","title":"SDDP.FirstStageStoppingRule","text":"FirstStageStoppingRule(; atol::Float64 = 1e-3, iterations::Int = 50)\n\nTerminate the algorithm when the outgoing values of the first-stage state variables have not changed by more than atol for iterations number of consecutive iterations.\n\nExample\n\nSDDP.train(model; stopping_rules = [FirstStageStoppingRule()])\n\n\n\n\n\n","category":"type"},{"location":"apireference/#Sampling-schemes","page":"API Reference","title":"Sampling schemes","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.AbstractSamplingScheme\nSDDP.sample_scenario\nSDDP.InSampleMonteCarlo\nSDDP.OutOfSampleMonteCarlo\nSDDP.Historical\nSDDP.PSRSamplingScheme\nSDDP.SimulatorSamplingScheme","category":"page"},{"location":"apireference/#SDDP.AbstractSamplingScheme","page":"API Reference","title":"SDDP.AbstractSamplingScheme","text":"AbstractSamplingScheme\n\nThe abstract type for the sampling-scheme interface.\n\nYou need to define the following methods:\n\nSDDP.sample_scenario\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.sample_scenario","page":"API Reference","title":"SDDP.sample_scenario","text":"sample_scenario(graph::PolicyGraph{T}, ::AbstractSamplingScheme) where {T}\n\nSample a scenario from the policy graph graph based on the sampling scheme.\n\nReturns ::Tuple{Vector{Tuple{T, <:Any}}, Bool}, where the first element is the scenario, and the second element is a Boolean flag indicating if the scenario was terminated due to the detection of a cycle.\n\nThe scenario is a list of tuples (type Vector{Tuple{T, <:Any}}) where the first component of each tuple is the index of the node, and the second component is the stagewise-independent noise term observed in that node.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.InSampleMonteCarlo","page":"API Reference","title":"SDDP.InSampleMonteCarlo","text":"InSampleMonteCarlo(;\n max_depth::Int = 0,\n terminate_on_cycle::Function = false,\n terminate_on_dummy_leaf::Function = true,\n rollout_limit::Function = (i::Int) -> typemax(Int),\n initial_node::Any = nothing,\n)\n\nA Monte Carlo sampling scheme using the in-sample data from the policy graph definition.\n\nIf terminate_on_cycle, terminate the forward pass once a cycle is detected. If max_depth > 0, return once max_depth nodes have been sampled. If terminate_on_dummy_leaf, terminate the forward pass with 1 - probability of sampling a child node.\n\nNote that if terminate_on_cycle = false and terminate_on_dummy_leaf = false then max_depth must be set > 0.\n\nControl which node the trajectories start from using initial_node. If it is left as nothing, the root node is used as the starting node.\n\nYou can use rollout_limit to set iteration specific depth limits. 
For example:\n\nInSampleMonteCarlo(rollout_limit = i -> 2 * i)\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.OutOfSampleMonteCarlo","page":"API Reference","title":"SDDP.OutOfSampleMonteCarlo","text":"OutOfSampleMonteCarlo(\n f::Function,\n graph::PolicyGraph;\n use_insample_transition::Bool = false,\n max_depth::Int = 0,\n terminate_on_cycle::Bool = false,\n terminate_on_dummy_leaf::Bool = true,\n rollout_limit::Function = i -> typemax(Int),\n initial_node = nothing,\n)\n\nCreate a Monte Carlo sampler using out-of-sample probabilities and/or supports for the stagewise-independent noise terms, and out-of-sample probabilities for the node-transition matrix.\n\nf is a function that takes the name of a node and returns a tuple containing a vector of new SDDP.Noise terms for the children of that node, and a vector of new SDDP.Noise terms for the stagewise-independent noise.\n\nIf f is called with the name of the root node (e.g., 0 in a linear policy graph, (0, 1) in a Markovian Policy Graph), then return a vector of SDDP.Noise for the children of the root node.\n\nIf use_insample_transition, the in-sample transition probabilities will be used. Therefore, f should only return a vector of the stagewise-independent noise terms, and f will not be called for the root node.\n\nIf terminate_on_cycle, terminate the forward pass once a cycle is detected. If max_depth > 0, return once max_depth nodes have been sampled. If terminate_on_dummy_leaf, terminate the forward pass with 1 - probability of sampling a child node.\n\nNote that if terminate_on_cycle = false and terminate_on_dummy_leaf = false then max_depth must be set > 0.\n\nControl which node the trajectories start from using initial_node. If it is left as nothing, the root node is used as the starting node.\n\nIf a node is deterministic, pass [SDDP.Noise(nothing, 1.0)] as the vector of noise terms.\n\nYou can use rollout_limit to set iteration specific depth limits. For example:\n\nOutOfSampleMonteCarlo(rollout_limit = i -> 2 * i)\n\nExamples\n\nGiven linear policy graph graph with T stages:\n\nsampler = OutOfSampleMonteCarlo(graph) do node\n if node == 0\n return [SDDP.Noise(1, 1.0)]\n else\n noise_terms = [SDDP.Noise(node, 0.3), SDDP.Noise(node + 1, 0.7)]\n children = node < T ? 
[SDDP.Noise(node + 1, 0.9)] : SDDP.Noise{Int}[]\n return children, noise_terms\n end\nend\n\nGiven linear policy graph graph with T stages:\n\nsampler = OutOfSampleMonteCarlo(graph, use_insample_transition=true) do node\n return [SDDP.Noise(node, 0.3), SDDP.Noise(node + 1, 0.7)]\nend\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.Historical","page":"API Reference","title":"SDDP.Historical","text":"Historical(\n scenarios::Vector{Vector{Tuple{T,S}}},\n probability::Vector{Float64};\n terminate_on_cycle::Bool = false,\n) where {T,S}\n\nA sampling scheme that samples a scenario from the vector of scenarios scenarios according to probability.\n\nExamples\n\nHistorical(\n [\n [(1, 0.5), (2, 1.0), (3, 0.5)],\n [(1, 0.5), (2, 0.0), (3, 1.0)],\n [(1, 1.0), (2, 0.0), (3, 0.0)]\n ],\n [0.2, 0.5, 0.3],\n)\n\n\n\n\n\nHistorical(\n scenarios::Vector{Vector{Tuple{T,S}}};\n terminate_on_cycle::Bool = false,\n) where {T,S}\n\nA deterministic sampling scheme that iterates through the vector of provided scenarios.\n\nExamples\n\nHistorical([\n [(1, 0.5), (2, 1.0), (3, 0.5)],\n [(1, 0.5), (2, 0.0), (3, 1.0)],\n [(1, 1.0), (2, 0.0), (3, 0.0)],\n])\n\n\n\n\n\nHistorical(\n scenario::Vector{Tuple{T,S}};\n terminate_on_cycle::Bool = false,\n) where {T,S}\n\nA deterministic sampling scheme that always samples scenario.\n\nExamples\n\nHistorical([(1, 0.5), (2, 1.5), (3, 0.75)])\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.PSRSamplingScheme","page":"API Reference","title":"SDDP.PSRSamplingScheme","text":"PSRSamplingScheme(N::Int; sampling_scheme = InSampleMonteCarlo())\n\nA sampling scheme with N scenarios, similar to how PSR does it.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.SimulatorSamplingScheme","page":"API Reference","title":"SDDP.SimulatorSamplingScheme","text":"SimulatorSamplingScheme(simulator::Function)\n\nCreate a sampling scheme based on a univariate scenario generator simulator, which returns a Vector{Float64} when called with no arguments like simulator().\n\nThis sampling scheme must be used with a Markovian graph constructed from the same simulator.\n\nThe sample space for SDDP.parameterize must be a tuple with 1 or 2 values, where the first value is the Markov state and the second value is the random variable for the current node.
If the node is deterministic, use Ω = [(markov_state,)].\n\nThis sampling scheme generates a new scenario by calling simulator(), and then picking the sequence of nodes in the Markovian graph that is closest to the new trajectory.\n\nExample\n\njulia> using SDDP\n\njulia> import HiGHS\n\njulia> simulator() = cumsum(rand(10))\nsimulator (generic function with 1 method)\n\njulia> model = SDDP.PolicyGraph(\n SDDP.MarkovianGraph(simulator; budget = 20, scenarios = 100);\n sense = :Max,\n upper_bound = 12,\n optimizer = HiGHS.Optimizer,\n ) do sp, node\n t, markov_state = node\n @variable(sp, x >= 0, SDDP.State, initial_value = 1)\n @variable(sp, u >= 0)\n @constraint(sp, x.out == x.in - u)\n # Elements of Ω MUST be a tuple in which `markov_state` is the first\n # element.\n Ω = [(markov_state, (u = u_max,)) for u_max in (0.0, 0.5)]\n SDDP.parameterize(sp, Ω) do (markov_state, ω)\n set_upper_bound(u, ω.u)\n @stageobjective(sp, markov_state * u)\n end\n end;\n\njulia> SDDP.train(\n model;\n print_level = 0,\n iteration_limit = 10,\n sampling_scheme = SDDP.SimulatorSamplingScheme(simulator),\n )\n\n\n\n\n\n\n","category":"type"},{"location":"apireference/#Parallel-schemes","page":"API Reference","title":"Parallel schemes","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.AbstractParallelScheme\nSDDP.Serial\nSDDP.Threaded\nSDDP.Asynchronous","category":"page"},{"location":"apireference/#SDDP.AbstractParallelScheme","page":"API Reference","title":"SDDP.AbstractParallelScheme","text":"AbstractParallelScheme\n\nAbstract type for different parallelism schemes.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.Serial","page":"API Reference","title":"SDDP.Serial","text":"Serial()\n\nRun SDDP in serial mode.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.Threaded","page":"API Reference","title":"SDDP.Threaded","text":"Threaded()\n\nRun SDDP in multi-threaded mode.\n\nUse julia --threads N to start Julia with N threads. In most cases, you should pick N to be the number of physical cores on your machine.\n\ndanger: Danger\nThis plug-in is experimental, and parts of SDDP.jl may not be threadsafe. If you encounter any problems or crashes, please open a GitHub issue.\n\nExample\n\nSDDP.train(model; parallel_scheme = SDDP.Threaded())\nSDDP.simulate(model; parallel_scheme = SDDP.Threaded())\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.Asynchronous","page":"API Reference","title":"SDDP.Asynchronous","text":"Asynchronous(\n [init_callback::Function,]\n slave_pids::Vector{Int} = workers();\n use_master::Bool = true,\n)\n\nRun SDDP in asynchronous mode workers with pid's slave_pids.\n\nAfter initializing the models on each worker, call init_callback(model). 
Note that init_callback is run locally on the worker and not on the master thread.\n\nIf use_master is true, iterations are also conducted on the master process.\n\n\n\n\n\nAsynchronous(\n solver::Any,\n slave_pids::Vector{Int} = workers();\n use_master::Bool = true,\n)\n\nRun SDDP in asynchronous mode workers with pid's slave_pids.\n\nSet the optimizer on each worker by calling JuMP.set_optimizer(model, solver).\n\n\n\n\n\n","category":"type"},{"location":"apireference/#Forward-passes","page":"API Reference","title":"Forward passes","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.AbstractForwardPass\nSDDP.DefaultForwardPass\nSDDP.RevisitingForwardPass\nSDDP.RiskAdjustedForwardPass\nSDDP.AlternativeForwardPass\nSDDP.AlternativePostIterationCallback\nSDDP.RegularizedForwardPass","category":"page"},{"location":"apireference/#SDDP.AbstractForwardPass","page":"API Reference","title":"SDDP.AbstractForwardPass","text":"AbstractForwardPass\n\nAbstract type for different forward passes.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.DefaultForwardPass","page":"API Reference","title":"SDDP.DefaultForwardPass","text":"DefaultForwardPass(; include_last_node::Bool = true)\n\nThe default forward pass.\n\nIf include_last_node = false and the sample terminated due to a cycle, then the last node (which forms the cycle) is omitted. This can be a useful option to set when training, but it comes at the cost of not knowing which node formed the cycle (if there are multiple possibilities).\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.RevisitingForwardPass","page":"API Reference","title":"SDDP.RevisitingForwardPass","text":"RevisitingForwardPass(\n period::Int = 500;\n sub_pass::AbstractForwardPass = DefaultForwardPass(),\n)\n\nA forward pass scheme that generates period new forward passes (using sub_pass), then revisits all previously explored forward passes. This can be useful to encourage convergence at a diversity of points in the state-space.\n\nSet period = typemax(Int) to disable.\n\nFor example, if period = 2, then the forward passes will be revisited as follows: 1, 2, 1, 2, 3, 4, 1, 2, 3, 4, 5, 6, 1, 2, ....\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.RiskAdjustedForwardPass","page":"API Reference","title":"SDDP.RiskAdjustedForwardPass","text":"RiskAdjustedForwardPass(;\n forward_pass::AbstractForwardPass,\n risk_measure::AbstractRiskMeasure,\n resampling_probability::Float64,\n rejection_count::Int = 5,\n)\n\nA forward pass that resamples a previous forward pass with resampling_probability probability, and otherwise samples a new forward pass using forward_pass.\n\nThe forward pass to revisit is chosen based on the risk-adjusted (using risk_measure) probability of the cumulative stage objectives.\n\nNote that this objective corresponds to the first time we visited the trajectory. Subsequent visits may have improved things, but we don't have the mechanisms in place to update it.
Therefore, remove the forward pass from resampling consideration after rejection_count revisits.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.AlternativeForwardPass","page":"API Reference","title":"SDDP.AlternativeForwardPass","text":"AlternativeForwardPass(\n forward_model::SDDP.PolicyGraph{T};\n forward_pass::AbstractForwardPass = DefaultForwardPass(),\n)\n\nA forward pass that simulates using forward_model, which may be different to the model used in the backwards pass.\n\nWhen using this forward pass, you should almost always pass SDDP.AlternativePostIterationCallback to the post_iteration_callback argument of SDDP.train.\n\nThis forward pass is most useful when the forward_model is non-convex and we use a convex approximation of the model in the backward pass.\n\nFor example, in optimal power flow models, we can use an AC-OPF formulation as the forward_model and a DC-OPF formulation as the backward model.\n\nFor more details see the paper:\n\nRosemberg, A., and Street, A., and Garcia, J.D., and Valladão, D.M., and Silva, T., and Dowson, O. (2021). Assessing the cost of network simplifications in long-term hydrothermal dispatch planning models. IEEE Transactions on Sustainable Energy. 13(1), 196-206.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.AlternativePostIterationCallback","page":"API Reference","title":"SDDP.AlternativePostIterationCallback","text":"AlternativePostIterationCallback(forward_model::PolicyGraph)\n\nA post-iteration callback that should be used whenever SDDP.AlternativeForwardPass is used.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.RegularizedForwardPass","page":"API Reference","title":"SDDP.RegularizedForwardPass","text":"RegularizedForwardPass(;\n rho::Float64 = 0.05,\n forward_pass::AbstractForwardPass = DefaultForwardPass(),\n)\n\nA forward pass that regularizes the outgoing first-stage state variables with an L-infty trust-region constraint about the previous iteration's solution. Specifically, the bounds of the outgoing state variable x are updated from (l, u) to max(l, x^k - rho * (u - l)) <= x <= min(u, x^k + rho * (u - l)), where x^k is the optimal solution of x in the previous iteration. On the first iteration, the value of the state at the root node is used.\n\nBy default, rho is set to 5%, which seems to work well empirically.\n\nPass a different forward_pass to control the forward pass within the regularized forward pass.\n\nThis forward pass is largely intended to be used for investment problems in which the first stage makes a series of capacity decisions that then influence the rest of the graph. 
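As a minimal sketch of enabling this forward pass (assuming model is an existing SDDP.PolicyGraph; the rho value is an illustrative assumption, not a recommendation):

SDDP.train(model; forward_pass = SDDP.RegularizedForwardPass(; rho = 0.1))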
An error is thrown if the first stage problem is not deterministic, and states are silently skipped if they do not have finite bounds.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#Risk-Measures","page":"API Reference","title":"Risk Measures","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.AbstractRiskMeasure\nSDDP.adjust_probability","category":"page"},{"location":"apireference/#SDDP.AbstractRiskMeasure","page":"API Reference","title":"SDDP.AbstractRiskMeasure","text":"AbstractRiskMeasure\n\nThe abstract type for the risk measure interface.\n\nYou need to define the following methods:\n\nSDDP.adjust_probability\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.adjust_probability","page":"API Reference","title":"SDDP.adjust_probability","text":"adjust_probability(\n measure::Expectation,\n risk_adjusted_probability::Vector{Float64},\n original_probability::Vector{Float64},\n noise_support::Vector{Noise{T}},\n objective_realizations::Vector{Float64},\n is_minimization::Bool,\n) where {T}\n\n\n\n\n\n","category":"function"},{"location":"apireference/#Duality-handlers","page":"API Reference","title":"Duality handlers","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.AbstractDualityHandler\nSDDP.ContinuousConicDuality\nSDDP.LagrangianDuality\nSDDP.StrengthenedConicDuality\nSDDP.BanditDuality","category":"page"},{"location":"apireference/#SDDP.AbstractDualityHandler","page":"API Reference","title":"SDDP.AbstractDualityHandler","text":"AbstractDualityHandler\n\nThe abstract type for the duality handler interface.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.ContinuousConicDuality","page":"API Reference","title":"SDDP.ContinuousConicDuality","text":"ContinuousConicDuality()\n\nCompute dual variables in the backward pass using conic duality, relaxing any binary or integer restrictions as necessary.\n\nTheory\n\nGiven the problem\n\nmin Cᵢ(x̄, u, w) + θᵢ\n st (x̄, x′, u) in Xᵢ(w) ∩ S\n x̄ - x == 0 [λ]\n\nwhere S ⊆ ℝ×ℤ, we relax integrality and use conic duality to solve for λ in the problem:\n\nmin Cᵢ(x̄, u, w) + θᵢ\n st (x̄, x′, u) in Xᵢ(w)\n x̄ - x == 0 [λ]\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.LagrangianDuality","page":"API Reference","title":"SDDP.LagrangianDuality","text":"LagrangianDuality(;\n method::LocalImprovementSearch.AbstractSearchMethod =\n LocalImprovementSearch.BFGS(100),\n)\n\nObtain dual variables in the backward pass using Lagrangian duality.\n\nArguments\n\nmethod: the LocalImprovementSearch method for maximizing the Lagrangian dual problem.\n\nTheory\n\nGiven the problem\n\nmin Cᵢ(x̄, u, w) + θᵢ\n st (x̄, x′, u) in Xᵢ(w) ∩ S\n x̄ - x == 0 [λ]\n\nwhere S ⊆ ℝ×ℤ, we solve the problem max L(λ), where:\n\nL(λ) = min Cᵢ(x̄, u, w) + θᵢ - λ' h(x̄)\n st (x̄, x′, u) in Xᵢ(w) ∩ S\n\nand where h(x̄) = x̄ - x.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.StrengthenedConicDuality","page":"API Reference","title":"SDDP.StrengthenedConicDuality","text":"StrengthenedConicDuality()\n\nObtain dual variables in the backward pass using strengthened conic duality.\n\nTheory\n\nGiven the problem\n\nmin Cᵢ(x̄, u, w) + θᵢ\n st (x̄, x′, u) in Xᵢ(w) ∩ S\n x̄ - x == 0 [λ]\n\nwe first obtain an estimate for λ using ContinuousConicDuality.\n\nThen, we evaluate the Lagrangian function:\n\nL(λ) = min Cᵢ(x̄, u, w) + θᵢ - λ' (x̄ - x)\n st (x̄, x′, u) in Xᵢ(w) ∩ S\n\nto obtain a
better estimate of the intercept.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.BanditDuality","page":"API Reference","title":"SDDP.BanditDuality","text":"BanditDuality()\n\nFormulates the problem of choosing a duality handler as a multi-armed bandit problem. The arms to choose between are:\n\nContinuousConicDuality\nStrengthenedConicDuality\nLagrangianDuality\n\nOur problem isn't a typical multi-armed bandit for two reasons:\n\nThe reward distribution is non-stationary (each arm converges to 0 as it keeps getting pulled).\nThe distribution of rewards is dependent on the history of the arms that were chosen.\n\nWe choose a very simple heuristic: pick the arm with the best mean + 1 standard deviation. That should ensure we consistently pick the arm with the best likelihood of improving the value function.\n\nIn the future, we should consider discounting the rewards of earlier iterations, and focus more on the more-recent rewards.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#Simulating-the-policy","page":"API Reference","title":"Simulating the policy","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.simulate\nSDDP.calculate_bound\nSDDP.add_all_cuts","category":"page"},{"location":"apireference/#SDDP.simulate","page":"API Reference","title":"SDDP.simulate","text":"simulate(\n model::PolicyGraph,\n number_replications::Int = 1,\n variables::Vector{Symbol} = Symbol[];\n sampling_scheme::AbstractSamplingScheme =\n InSampleMonteCarlo(),\n custom_recorders = Dict{Symbol, Function}(),\n duality_handler::Union{Nothing,AbstractDualityHandler} = nothing,\n skip_undefined_variables::Bool = false,\n parallel_scheme::AbstractParallelScheme = Serial(),\n incoming_state::Dict{String,Float64} = _initial_state(model),\n )::Vector{Vector{Dict{Symbol,Any}}}\n\nPerform a simulation of the policy model with number_replications replications.\n\nReturn data structure\n\nReturns a vector with one element for each replication. Each element is a vector with one element for each node in the scenario that was sampled. Each element in that vector is a dictionary containing information about the subproblem that was solved.\n\nIn that dictionary there are four special keys:\n\n:node_index, which records the index of the sampled node in the policy model\n:noise_term, which records the noise observed at the node\n:stage_objective, which records the stage-objective of the subproblem\n:bellman_term, which records the cost/value-to-go of the node.\n\nThe sum of :stage_objective + :bellman_term will equal the objective value of the solved subproblem.\n\nIn addition to the special keys, the dictionary will contain the result of key => JuMP.value(subproblem[key]) for each key in variables. This is useful to obtain the primal value of the state and control variables.\n\nPositional arguments\n\nmodel: the model to simulate\nnumber_replications::Int = 1: the number of simulation replications to conduct, that is, the length of the simulation vector that is returned by this function. If omitted, this defaults to 1.\nvariables::Vector{Symbol} = Symbol[]: a list of the variable names to record the value of in each stage.\n\nKeyword arguments\n\nsampling_scheme: the sampling scheme used when simulating.\ncustom_recorders: see Custom recorders section below.\nduality_handler: the SDDP.AbstractDualityHandler used to compute dual variables.
If you do not require dual variables (or if they are not available), pass duality_handler = nothing.\nskip_undefined_variables: If you attempt to simulate the value of a variable that is only defined in some of the stage problems, an error will be thrown. To over-ride this (and return a NaN instead), pass skip_undefined_variables = true.\nparallel_scheme: Use parallel_scheme::[AbstractParallelScheme](@ref) to specify a scheme for simulating in parallel. Defaults to Serial.\ninitial_state: Use incoming_state to pass an initial value of the state variable, if it differs from that at the root node. Each key should be the string name of the state variable.\n\nCustom recorders\n\nFor more complicated data, the custom_recorders keyword argument can be used.\n\nFor example, to record the dual of a constraint named my_constraint, pass the following:\n\nsimulation_results = SDDP.simulate(model, 2;\n custom_recorders = Dict{Symbol, Function}(\n :constraint_dual => sp -> JuMP.dual(sp[:my_constraint])\n )\n)\n\nThe value of the dual in the first stage of the second replication can be accessed as:\n\nsimulation_results[2][1][:constraint_dual]\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.calculate_bound","page":"API Reference","title":"SDDP.calculate_bound","text":"SDDP.calculate_bound(\n model::PolicyGraph,\n state::Dict{Symbol,Float64} = model.initial_root_state;\n risk_measure::AbstractRiskMeasure = Expectation(),\n)\n\nCalculate the lower bound (if minimizing, otherwise upper bound) of the problem model at the point state, assuming the risk measure at the root node is risk_measure.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.add_all_cuts","page":"API Reference","title":"SDDP.add_all_cuts","text":"add_all_cuts(model::PolicyGraph)\n\nAdd all cuts that may have been deleted back into the model.\n\nExplanation\n\nDuring the solve, SDDP.jl may decide to remove cuts for a variety of reasons.\n\nThese can include cuts that define the optimal value function, particularly around the extremes of the state-space (e.g., reservoirs empty).\n\nThis function ensures that all cuts discovered are added back into the model.\n\nYou should call this after train and before simulate.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#Decision-rules","page":"API Reference","title":"Decision rules","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.DecisionRule\nSDDP.evaluate","category":"page"},{"location":"apireference/#SDDP.DecisionRule","page":"API Reference","title":"SDDP.DecisionRule","text":"DecisionRule(model::PolicyGraph{T}; node::T)\n\nCreate a decision rule for node node in model.\n\nExample\n\nrule = SDDP.DecisionRule(model; node = 1)\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.evaluate","page":"API Reference","title":"SDDP.evaluate","text":"evaluate(\n rule::DecisionRule;\n incoming_state::Dict{Symbol,Float64},\n noise = nothing,\n controls_to_record = Symbol[],\n)\n\nEvalute the decision rule rule at the point described by the incoming_state and noise.\n\nIf the node is deterministic, omit the noise argument.\n\nPass a list of symbols to controls_to_record to save the optimal primal solution corresponding to the names registered in the model.\n\n\n\n\n\nevaluate(\n V::ValueFunction,\n point::Dict{Union{Symbol,String},<:Real}\n objective_state = nothing,\n belief_state = nothing\n)\n\nEvaluate the value function V at point in the state-space.\n\nReturns a tuple 
containing the height of the function, and the subgradient w.r.t. the convex state-variables.\n\nExamples\n\nevaluate(V, Dict(:volume => 1.0))\n\nIf the state variable is constructed like @variable(sp, volume[1:4] >= 0, SDDP.State, initial_value = 0.0), use [i] to index the state variable:\n\nevaluate(V, Dict(Symbol(\"volume[1]\") => 1.0))\n\nYou can also use strings or symbols for the keys.\n\nevaluate(V, Dict(\"volume[1]\" => 1))\n\n\n\n\n\nevaluate(V::ValueFunction{Nothing, Nothing}; kwargs...)\n\nEvaluate the value function V at the point in the state-space specified by kwargs.\n\nExamples\n\nevaluate(V; volume = 1)\n\n\n\n\n\nevaluate(\n model::PolicyGraph{T},\n validation_scenarios::ValidationScenarios{T,S},\n) where {T,S}\n\nEvaluate the performance of the policy contained in model after a call to train on the scenarios specified by validation_scenarios.\n\nExamples\n\nmodel, validation_scenarios = read_from_file(\"my_model.sof.json\")\ntrain(model; iteration_limit = 100)\nsimulations = evaluate(model, validation_scenarios)\n\n\n\n\n\n","category":"function"},{"location":"apireference/#Visualizing-the-policy","page":"API Reference","title":"Visualizing the policy","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.SpaghettiPlot\nSDDP.add_spaghetti\nSDDP.publication_plot\nSDDP.ValueFunction\nSDDP.evaluate(::SDDP.ValueFunction, ::Dict{Symbol,Float64})\nSDDP.plot","category":"page"},{"location":"apireference/#SDDP.SpaghettiPlot","page":"API Reference","title":"SDDP.SpaghettiPlot","text":"SDDP.SpaghettiPlot(; stages, scenarios)\n\nInitialize a new SpaghettiPlot with stages stages and scenarios number of replications.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.add_spaghetti","page":"API Reference","title":"SDDP.add_spaghetti","text":"SDDP.add_spaghetti(data_function::Function, plt::SpaghettiPlot; kwargs...)\n\nDescription\n\nAdd a new figure to the SpaghettiPlot plt, where the y-value of the scenarioth line when x = stage is given by data_function(plt.simulations[scenario][stage]).\n\nKeyword arguments\n\nxlabel: set the xaxis label\nylabel: set the yaxis label\ntitle: set the title of the plot\nymin: set the minimum y value\nymax: set the maximum y value\ncumulative: plot the additive accumulation of the value across the stages\ninterpolate: interpolation method for lines between stages.\n\nDefaults to \"linear\"; see the d3 docs for all options.\n\nExamples\n\nsimulations = simulate(model, 10)\nplt = SDDP.spaghetti_plot(simulations)\nSDDP.add_spaghetti(plt; title = \"Stage objective\") do data\n return data[:stage_objective]\nend\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.publication_plot","page":"API Reference","title":"SDDP.publication_plot","text":"SDDP.publication_plot(\n data_function, simulations;\n quantile = [0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0],\n kwargs...)\n\nCreate a Plots.jl recipe plot of the simulations.\n\nSee Plots.jl for the list of keyword arguments.\n\nExamples\n\nSDDP.publication_plot(simulations; title = \"My title\") do data\n return data[:stage_objective]\nend\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.ValueFunction","page":"API Reference","title":"SDDP.ValueFunction","text":"ValueFunction\n\nA representation of the value function.
SDDP.jl uses the following unique representation of the value function that is undocumented in the literature.\n\nIt supports three types of state variables:\n\nx - convex \"resource\" states\nb - concave \"belief\" states\ny - concave \"objective\" states\n\nIn addition, we have three types of cuts:\n\nSingle-cuts (also called \"average\" cuts in the literature), which involve the risk-adjusted expectation of the cost-to-go.\nMulti-cuts, which use a different cost-to-go term for each realization w.\nRisk-cuts, which correspond to the facets of the dual interpretation of a coherent risk measure.\n\nTherefore, ValueFunction returns a JuMP model of the following form:\n\nV(x, b, y) = min: μᵀb + νᵀy + θ\n s.t. # \"Single\" / \"Average\" cuts\n μᵀb(j) + νᵀy(j) + θ >= α(j) + xᵀβ(j), ∀ j ∈ J\n # \"Multi\" cuts\n μᵀb(k) + νᵀy(k) + φ(w) >= α(k, w) + xᵀβ(k, w), ∀w ∈ Ω, k ∈ K\n # \"Risk-set\" cuts\n θ ≥ Σ{p(k, w) * φ(w)}_w - μᵀb(k) - νᵀy(k), ∀ k ∈ K\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.evaluate-Tuple{SDDP.ValueFunction, Dict{Symbol, Float64}}","page":"API Reference","title":"SDDP.evaluate","text":"evaluate(\n V::ValueFunction,\n point::Dict{Union{Symbol,String},<:Real};\n objective_state = nothing,\n belief_state = nothing\n)\n\nEvaluate the value function V at point in the state-space.\n\nReturns a tuple containing the height of the function, and the subgradient w.r.t. the convex state-variables.\n\nExamples\n\nevaluate(V, Dict(:volume => 1.0))\n\nIf the state variable is constructed like @variable(sp, volume[1:4] >= 0, SDDP.State, initial_value = 0.0), use [i] to index the state variable:\n\nevaluate(V, Dict(Symbol(\"volume[1]\") => 1.0))\n\nYou can also use strings or symbols for the keys.\n\nevaluate(V, Dict(\"volume[1]\" => 1))\n\n\n\n\n\n","category":"method"},{"location":"apireference/#SDDP.plot","page":"API Reference","title":"SDDP.plot","text":"plot(plt::SpaghettiPlot[, filename::String]; open::Bool = true)\n\nWrite the SpaghettiPlot plt to filename. If filename is not given, it will be saved to a temporary directory. If open = true, then a browser window will be opened to display the resulting HTML file.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#Debugging-the-model","page":"API Reference","title":"Debugging the model","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.write_subproblem_to_file\nSDDP.deterministic_equivalent","category":"page"},{"location":"apireference/#SDDP.write_subproblem_to_file","page":"API Reference","title":"SDDP.write_subproblem_to_file","text":"write_subproblem_to_file(\n node::Node,\n filename::String;\n throw_error::Bool = false,\n)\n\nWrite the subproblem contained in node to the file filename.\n\nThe throw_error argument is used internally by SDDP.jl. 
If set, an error will be thrown.\n\nExample\n\nSDDP.write_subproblem_to_file(model[1], \"subproblem_1.lp\")\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.deterministic_equivalent","page":"API Reference","title":"SDDP.deterministic_equivalent","text":"deterministic_equivalent(\n pg::PolicyGraph{T},\n optimizer = nothing;\n time_limit::Union{Real,Nothing} = 60.0,\n)\n\nForm a JuMP model that represents the deterministic equivalent of the problem.\n\nExamples\n\ndeterministic_equivalent(model)\n\ndeterministic_equivalent(model, HiGHS.Optimizer)\n\n\n\n\n\n","category":"function"},{"location":"apireference/#StochOptFormat","page":"API Reference","title":"StochOptFormat","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.write_to_file\nSDDP.read_from_file\nBase.write(::IO, ::SDDP.PolicyGraph)\nBase.read(::IO, ::Type{SDDP.PolicyGraph})\nSDDP.evaluate(::SDDP.PolicyGraph{T}, ::SDDP.ValidationScenarios{T}) where {T}\nSDDP.ValidationScenarios\nSDDP.ValidationScenario","category":"page"},{"location":"apireference/#SDDP.write_to_file","page":"API Reference","title":"SDDP.write_to_file","text":"write_to_file(\n model::PolicyGraph,\n filename::String;\n compression::MOI.FileFormats.AbstractCompressionScheme =\n MOI.FileFormats.AutomaticCompression(),\n kwargs...\n)\n\nWrite model to filename in the StochOptFormat file format.\n\nPass an argument to compression to override the default of automatically detecting the file compression to use based on the extension of filename.\n\nSee Base.write(::IO, ::PolicyGraph) for information on the keyword arguments that can be provided.\n\nwarning: Warning\nThis function is experimental. See the full warning in Base.write(::IO, ::PolicyGraph).\n\nExamples\n\nwrite_to_file(model, \"my_model.sof.json\"; validation_scenarios = 10)\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.read_from_file","page":"API Reference","title":"SDDP.read_from_file","text":"read_from_file(\n filename::String;\n compression::MOI.FileFormats.AbstractCompressionScheme =\n MOI.FileFormats.AutomaticCompression(),\n kwargs...\n)::Tuple{PolicyGraph, ValidationScenarios}\n\nReturn a tuple containing a PolicyGraph object and a ValidationScenarios read from filename in the StochOptFormat file format.\n\nPass an argument to compression to override the default of automatically detecting the file compression to use based on the extension of filename.\n\nSee Base.read(::IO, ::Type{PolicyGraph}) for information on the keyword arguments that can be provided.\n\nwarning: Warning\nThis function is experimental. See the full warning in Base.read(::IO, ::Type{PolicyGraph}).\n\nExamples\n\nmodel, validation_scenarios = read_from_file(\"my_model.sof.json\")\n\n\n\n\n\n","category":"function"},{"location":"apireference/#Base.write-Tuple{IO, SDDP.PolicyGraph}","page":"API Reference","title":"Base.write","text":"Base.write(\n io::IO,\n model::PolicyGraph;\n validation_scenarios::Union{Nothing,Int,ValidationScenarios} = nothing,\n sampling_scheme::AbstractSamplingScheme = InSampleMonteCarlo(),\n kwargs...\n)\n\nWrite model to io in the StochOptFormat file format.\n\nPass an Int to validation_scenarios (default nothing) to specify the number of test scenarios to generate using the sampling_scheme sampling scheme. 
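For example, passing validation_scenarios = 10 (as in the example at the end of this docstring) samples ten test scenarios with the default InSampleMonteCarlo scheme. 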
Alternatively, pass a ValidationScenarios object to manually specify the test scenarios to use.\n\nAny additional kwargs passed to write will be stored in the top-level of the resulting StochOptFormat file. Valid arguments include name, author, date, and description.\n\nCompatibility\n\nwarning: Warning\nTHIS FUNCTION IS EXPERIMENTAL. THINGS MAY CHANGE BETWEEN COMMITS. YOU SHOULD NOT RELY ON THIS FUNCTIONALITY AS A LONG-TERM FILE FORMAT (YET).\n\nIn addition to potential changes to the underlying format, only a subset of possible modifications are supported. These include:\n\nJuMP.fix\nJuMP.set_lower_bound\nJuMP.set_upper_bound\nJuMP.set_normalized_rhs\nChanges to the constant or affine terms in a stage objective.\n\nIf your model uses anything other than these, this function will silently write an incorrect formulation of the problem.\n\nExamples\n\nopen(\"my_model.sof.json\", \"w\") do io\n write(\n io,\n model;\n validation_scenarios = 10,\n name = \"MyModel\",\n author = \"@odow\",\n date = \"2020-07-20\",\n description = \"Example problem for the SDDP.jl documentation\",\n )\nend\n\n\n\n\n\n","category":"method"},{"location":"apireference/#Base.read-Tuple{IO, Type{SDDP.PolicyGraph}}","page":"API Reference","title":"Base.read","text":"Base.read(\n io::IO,\n ::Type{PolicyGraph};\n bound::Float64 = 1e6,\n)::Tuple{PolicyGraph,ValidationScenarios}\n\nReturn a tuple containing a PolicyGraph object and a ValidationScenarios read from io in the StochOptFormat file format.\n\nSee also: evaluate.\n\nCompatibility\n\nwarning: Warning\nThis function is experimental. Things may change between commits. You should not rely on this functionality as a long-term file format (yet).\n\nIn addition to potential changes to the underlying format, only a subset of possible modifications are supported. These include:\n\nAdditive random variables in the constraints or in the objective\nMultiplicative random variables in the objective\n\nIf your model uses anything other than these, this function may throw an error or silently build a non-convex model.\n\nExamples\n\nopen(\"my_model.sof.json\", \"r\") do io\n model, validation_scenarios = read(io, PolicyGraph)\nend\n\n\n\n\n\n","category":"method"},{"location":"apireference/#SDDP.evaluate-Union{Tuple{T}, Tuple{SDDP.PolicyGraph{T}, SDDP.ValidationScenarios{T}}} where T","page":"API Reference","title":"SDDP.evaluate","text":"evaluate(\n model::PolicyGraph{T},\n validation_scenarios::ValidationScenarios{T,S},\n) where {T,S}\n\nEvaluate the performance of the policy contained in model after a call to train on the scenarios specified by validation_scenarios.\n\nExamples\n\nmodel, validation_scenarios = read_from_file(\"my_model.sof.json\")\ntrain(model; iteration_limit = 100)\nsimulations = evaluate(model, validation_scenarios)\n\n\n\n\n\n","category":"method"},{"location":"apireference/#SDDP.ValidationScenarios","page":"API Reference","title":"SDDP.ValidationScenarios","text":"ValidationScenarios{T,S}(scenarios::Vector{ValidationScenario{T,S}})\n\nAn AbstractSamplingScheme based on a vector of scenarios.\n\nEach scenario is a vector of Tuple{T, S} where the first element is the node to visit and the second element is the realization of the stagewise-independent noise term. 
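For example (a sketch that assumes integer node indices and Float64 noise, matching the constructor shown below), SDDP.ValidationScenario([(1, 0.0), (2, 50.0), (3, 100.0)]) describes a scenario that visits nodes 1, 2, and 3 with those noise realizations, and a vector of such scenarios can be wrapped in a ValidationScenarios object. 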
Pass nothing if the node is deterministic.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.ValidationScenario","page":"API Reference","title":"SDDP.ValidationScenario","text":"ValidationScenario{T,S}(scenario::Vector{Tuple{T,S}})\n\nA single scenario for testing.\n\nSee also: ValidationScenarios.\n\n\n\n\n\n","category":"type"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"EditURL = \"markov_uncertainty.jl\"","category":"page"},{"location":"tutorial/markov_uncertainty/#Markovian-policy-graphs","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"","category":"section"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"In our previous tutorials (An introduction to SDDP.jl and Uncertainty in the objective function), we formulated a simple hydrothermal scheduling problem with stagewise-independent random variables in the right-hand side of the constraints and in the objective function. Now, in this tutorial, we introduce some stagewise-dependent uncertainty using a Markov chain.","category":"page"},{"location":"tutorial/markov_uncertainty/#Formulating-the-problem","page":"Markovian policy graphs","title":"Formulating the problem","text":"","category":"section"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"In this tutorial we consider a Markov chain with two climate states: wet and dry. Each Markov state is associated with an integer, in this case the wet climate state is Markov state 1 and the dry climate state is Markov state 2. In the wet climate state, the probability of the high inflow increases to 50%, and the probability of the low inflow decreases to 1/6. In the dry climate state, the converse happens. There is also persistence in the climate state: the probability of remaining in the current state is 75%, and the probability of transitioning to the other climate state is 25%. We assume that the first stage starts in the wet climate state.","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"Here is a picture of the model we're going to implement.","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"(Image: Markovian policy graph)","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"There are five nodes in our graph. Each node is named by a tuple (t, i), where t is the stage for t=1,2,3, and i is the Markov state for i=1,2. As before, the wavy lines denote the stagewise-independent random variable.","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"For each stage, we need to provide a Markov transition matrix. This is an MxN matrix, where the element A[i, j] gives the probability of transitioning from Markov state i in the previous stage to Markov state j in the current stage. 
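For example, in the matrix [0.75 0.25; 0.25 0.75] used below, the element A[1, 2] = 0.25 is the probability of moving from the wet climate state (Markov state 1) to the dry climate state (Markov state 2). 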
The first stage is special because we assume there is a \"zero'th\" stage which has one Markov state (the round node in the graph above). Furthermore, the number of columns in the transition matrix of a stage (i.e. the number of Markov states) must equal the number of rows in the next stage's transition matrix. For our example, the vector of Markov transition matrices is given by:","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"T = Array{Float64,2}[[1.0]', [0.75 0.25], [0.75 0.25; 0.25 0.75]]","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"note: Note\nMake sure to add the ' after the first transition matrix so Julia can distinguish between a vector and a matrix.","category":"page"},{"location":"tutorial/markov_uncertainty/#Creating-a-model","page":"Markovian policy graphs","title":"Creating a model","text":"","category":"section"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"using SDDP, HiGHS\n\nΩ = [\n (inflow = 0.0, fuel_multiplier = 1.5),\n (inflow = 50.0, fuel_multiplier = 1.0),\n (inflow = 100.0, fuel_multiplier = 0.75),\n]\n\nmodel = SDDP.MarkovianPolicyGraph(;\n transition_matrices = Array{Float64,2}[\n [1.0]',\n [0.75 0.25],\n [0.75 0.25; 0.25 0.75],\n ],\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, node\n # Unpack the stage and Markov index.\n t, markov_state = node\n # Define the state variable.\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n # Define the control variables.\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n inflow\n end)\n # Define the constraints\n @constraints(\n subproblem,\n begin\n volume.out == volume.in + inflow - hydro_generation - hydro_spill\n thermal_generation + hydro_generation == 150.0\n end\n )\n # Note how we can use `markov_state` to dispatch an `if` statement.\n probability = if markov_state == 1 # wet climate state\n [1 / 6, 1 / 3, 1 / 2]\n else # dry climate state\n [1 / 2, 1 / 3, 1 / 6]\n end\n\n fuel_cost = [50.0, 100.0, 150.0]\n SDDP.parameterize(subproblem, Ω, probability) do ω\n JuMP.fix(inflow, ω.inflow)\n @stageobjective(\n subproblem,\n ω.fuel_multiplier * fuel_cost[t] * thermal_generation\n )\n end\nend","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"tip: Tip\nFor more information on SDDP.MarkovianPolicyGraphs, read Create a general policy graph.","category":"page"},{"location":"tutorial/markov_uncertainty/#Training-and-simulating-the-policy","page":"Markovian policy graphs","title":"Training and simulating the policy","text":"","category":"section"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"As in the previous three tutorials, we train the policy:","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"SDDP.train(model)","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"Instead of performing a Monte Carlo simulation like the previous tutorials, we may want to simulate one particular sequence of noise realizations. 
This historical simulation can also be conducted by passing a SDDP.Historical sampling scheme to the sampling_scheme keyword of the SDDP.simulate function.","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"We can confirm that the historical sequence of nodes was visited by querying the :node_index key of the simulation results.","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"simulations = SDDP.simulate(\n model;\n sampling_scheme = SDDP.Historical([\n ((1, 1), Ω[1]),\n ((2, 2), Ω[3]),\n ((3, 1), Ω[2]),\n ]),\n)\n\n[stage[:node_index] for stage in simulations[1]]","category":"page"},{"location":"examples/FAST_hydro_thermal/","page":"FAST: the hydro-thermal problem","title":"FAST: the hydro-thermal problem","text":"EditURL = \"FAST_hydro_thermal.jl\"","category":"page"},{"location":"examples/FAST_hydro_thermal/#FAST:-the-hydro-thermal-problem","page":"FAST: the hydro-thermal problem","title":"FAST: the hydro-thermal problem","text":"","category":"section"},{"location":"examples/FAST_hydro_thermal/","page":"FAST: the hydro-thermal problem","title":"FAST: the hydro-thermal problem","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/FAST_hydro_thermal/","page":"FAST: the hydro-thermal problem","title":"FAST: the hydro-thermal problem","text":"An implementation of the Hydro-thermal example from FAST","category":"page"},{"location":"examples/FAST_hydro_thermal/","page":"FAST: the hydro-thermal problem","title":"FAST: the hydro-thermal problem","text":"using SDDP, HiGHS, Test\n\nfunction fast_hydro_thermal()\n model = SDDP.LinearPolicyGraph(;\n stages = 2,\n upper_bound = 0.0,\n sense = :Max,\n optimizer = HiGHS.Optimizer,\n ) do sp, t\n @variable(sp, 0 <= x <= 8, SDDP.State, initial_value = 0.0)\n @variables(sp, begin\n y >= 0\n p >= 0\n ξ\n end)\n @constraints(sp, begin\n p + y >= 6\n x.out <= x.in - y + ξ\n end)\n RAINFALL = (t == 1 ? [6] : [2, 10])\n SDDP.parameterize(sp, RAINFALL) do ω\n return JuMP.fix(ξ, ω)\n end\n @stageobjective(sp, -5 * p)\n end\n\n det = SDDP.deterministic_equivalent(model, HiGHS.Optimizer)\n set_silent(det)\n JuMP.optimize!(det)\n @test JuMP.objective_sense(det) == MOI.MAX_SENSE\n @test JuMP.objective_value(det) == -10\n SDDP.train(model)\n @test SDDP.calculate_bound(model) == -10\n return\nend\n\nfast_hydro_thermal()","category":"page"},{"location":"examples/StochDynamicProgramming.jl_multistock/","page":"StochDynamicProgramming: the multistock problem","title":"StochDynamicProgramming: the multistock problem","text":"EditURL = \"StochDynamicProgramming.jl_multistock.jl\"","category":"page"},{"location":"examples/StochDynamicProgramming.jl_multistock/#StochDynamicProgramming:-the-multistock-problem","page":"StochDynamicProgramming: the multistock problem","title":"StochDynamicProgramming: the multistock problem","text":"","category":"section"},{"location":"examples/StochDynamicProgramming.jl_multistock/","page":"StochDynamicProgramming: the multistock problem","title":"StochDynamicProgramming: the multistock problem","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/StochDynamicProgramming.jl_multistock/","page":"StochDynamicProgramming: the multistock problem","title":"StochDynamicProgramming: the multistock problem","text":"This example comes from StochDynamicProgramming.jl.","category":"page"},{"location":"examples/StochDynamicProgramming.jl_multistock/","page":"StochDynamicProgramming: the multistock problem","title":"StochDynamicProgramming: the multistock problem","text":"using SDDP, HiGHS, Test\n\nfunction test_multistock_example()\n model = SDDP.LinearPolicyGraph(;\n stages = 5,\n lower_bound = -5.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, stage\n @variable(\n subproblem,\n 0 <= stock[i = 1:3] <= 1,\n SDDP.State,\n initial_value = 0.5\n )\n @variables(subproblem, begin\n 0 <= control[i = 1:3] <= 0.5\n ξ[i = 1:3] # Dummy for RHS noise.\n end)\n @constraints(\n subproblem,\n begin\n sum(control) - 0.5 * 3 <= 0\n [i = 1:3], stock[i].out == stock[i].in + control[i] - ξ[i]\n end\n )\n Ξ = collect(\n Base.product((0.0, 0.15, 0.3), (0.0, 0.15, 0.3), (0.0, 0.15, 0.3)),\n )[:]\n SDDP.parameterize(subproblem, Ξ) do ω\n return JuMP.fix.(ξ, ω)\n end\n @stageobjective(subproblem, (sin(3 * stage) - 1) * sum(control))\n end\n SDDP.train(\n model;\n iteration_limit = 100,\n cut_type = SDDP.SINGLE_CUT,\n log_frequency = 10,\n )\n @test SDDP.calculate_bound(model) ≈ -4.349 atol = 0.01\n\n simulation_results = SDDP.simulate(model, 5000)\n @test length(simulation_results) == 5000\n μ = SDDP.Statistics.mean(\n sum(data[:stage_objective] for data in simulation) for\n simulation in simulation_results\n )\n @test μ ≈ -4.349 atol = 0.1\n return\nend\n\ntest_multistock_example()","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"EditURL = \"plotting.jl\"","category":"page"},{"location":"tutorial/plotting/#Plotting-tools","page":"Plotting tools","title":"Plotting tools","text":"","category":"section"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"In our previous tutorials, we formulated, solved, and simulated multistage stochastic optimization problems. However, we haven't really investigated what the solution looks like. Luckily, SDDP.jl includes a number of plotting tools to help us do that. In this tutorial, we explain the tools and make some pretty pictures.","category":"page"},{"location":"tutorial/plotting/#Preliminaries","page":"Plotting tools","title":"Preliminaries","text":"","category":"section"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"The next two plot types help visualize the policy. Thus, we first need to create a policy and simulate some trajectories. 
So, let's take the model from Markovian policy graphs, train it for 20 iterations, and then simulate 100 Monte Carlo realizations of the policy.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"using SDDP, HiGHS\n\nΩ = [\n (inflow = 0.0, fuel_multiplier = 1.5),\n (inflow = 50.0, fuel_multiplier = 1.0),\n (inflow = 100.0, fuel_multiplier = 0.75),\n]\n\nmodel = SDDP.MarkovianPolicyGraph(;\n transition_matrices = Array{Float64,2}[\n [1.0]',\n [0.75 0.25],\n [0.75 0.25; 0.25 0.75],\n ],\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, node\n t, markov_state = node\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n inflow\n end)\n @constraints(\n subproblem,\n begin\n volume.out == volume.in + inflow - hydro_generation - hydro_spill\n thermal_generation + hydro_generation == 150.0\n end\n )\n probability =\n markov_state == 1 ? [1 / 6, 1 / 3, 1 / 2] : [1 / 2, 1 / 3, 1 / 6]\n fuel_cost = [50.0, 100.0, 150.0]\n SDDP.parameterize(subproblem, Ω, probability) do ω\n JuMP.fix(inflow, ω.inflow)\n @stageobjective(\n subproblem,\n ω.fuel_multiplier * fuel_cost[t] * thermal_generation\n )\n end\nend\n\nSDDP.train(model; iteration_limit = 20, run_numerical_stability_report = false)\n\nsimulations = SDDP.simulate(\n model,\n 100,\n [:volume, :thermal_generation, :hydro_generation, :hydro_spill],\n)\n\nprintln(\"Completed $(length(simulations)) simulations.\")","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"Great! Now we have some data in simulations to visualize.","category":"page"},{"location":"tutorial/plotting/#Spaghetti-plots","page":"Plotting tools","title":"Spaghetti plots","text":"","category":"section"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"The first plotting utility we discuss is a spaghetti plot (you'll understand the name when you see the graph).","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"To create a spaghetti plot, begin by creating a new SDDP.SpaghettiPlot instance as follows:","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"plt = SDDP.SpaghettiPlot(simulations)","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"We can add plots to plt using the SDDP.add_spaghetti function.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"SDDP.add_spaghetti(plt; title = \"Reservoir volume\") do data\n return data[:volume].out\nend","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"In addition to returning values from the simulation, you can compute things:","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"SDDP.add_spaghetti(plt; title = \"Fuel cost\", ymin = 0, ymax = 250) do data\n if data[:thermal_generation] > 0\n return data[:stage_objective] / data[:thermal_generation]\n else # No thermal generation, so return 0.0.\n return 0.0\n end\nend","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"Note that there are many keyword arguments in addition to title. 
For example, we fixed the minimum and maximum values of the y-axis using ymin and ymax. See the SDDP.add_spaghetti documentation for all the arguments.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"Having built the plot, we now need to display it using SDDP.plot.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"SDDP.plot(plt, \"spaghetti_plot.html\")","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"This should open a webpage that looks like this one.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"Using the mouse, you can highlight individual trajectories by hovering over them. This makes it possible to visualize a single trajectory across multiple dimensions.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"If you click on the plot, then trajectories that are close to the mouse pointer are shown darker and those further away are shown lighter.","category":"page"},{"location":"tutorial/plotting/#Publication-plots","page":"Plotting tools","title":"Publication plots","text":"","category":"section"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"Instead of the interactive Javascript plots, you can also create some publication ready plots using the SDDP.publication_plot function.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"info: Info\nYou need to install the Plots.jl package for this to work. We used the GR backend (gr()), but any Plots.jl backend should work.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"SDDP.publication_plot implements a plot recipe to create ribbon plots of each variable against the stages. The first argument is the vector of simulation dictionaries and the second argument is the dictionary key that you want to plot. Standard Plots.jl keyword arguments such as title and xlabel can be used to modify the look of each plot. By default, the plot displays ribbons of the 0-100, 10-90, and 25-75 percentiles. The dark, solid line in the middle is the median (i.e. 
50'th percentile).","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"import Plots\nPlots.plot(\n SDDP.publication_plot(simulations; title = \"Outgoing volume\") do data\n return data[:volume].out\n end,\n SDDP.publication_plot(simulations; title = \"Thermal generation\") do data\n return data[:thermal_generation]\n end;\n xlabel = \"Stage\",\n ylims = (0, 200),\n layout = (1, 2),\n)","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"You can save this plot as a PDF using the Plots.jl function savefig:","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"Plots.savefig(\"my_picture.pdf\")","category":"page"},{"location":"tutorial/plotting/#Plotting-the-value-function","page":"Plotting tools","title":"Plotting the value function","text":"","category":"section"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"You can obtain an object representing the value function of a node using SDDP.ValueFunction.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"V = SDDP.ValueFunction(model[(1, 1)])","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"The value function can be evaluated using SDDP.evaluate.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"SDDP.evaluate(V; volume = 1)","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"evaluate returns the height of the value function, and a subgradient with respect to the convex state variables.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"You can also plot the value function using SDDP.plot","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"SDDP.plot(V, volume = 0:200, filename = \"value_function.html\")","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"This should open a webpage that looks like this one.","category":"page"},{"location":"tutorial/plotting/#Convergence-dashboard","page":"Plotting tools","title":"Convergence dashboard","text":"","category":"section"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"If the text-based logging isn't to your liking, you can open a visualization of the training by passing dashboard = true to SDDP.train.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"SDDP.train(model; dashboard = true)","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"By default, dashboard = false because there is an initial overhead associated with opening and preparing the plot.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"warning: Warning\nThe dashboard is experimental. 
There are known bugs associated with it, e.g., SDDP.jl#226.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"EditURL = \"the_farmers_problem.jl\"","category":"page"},{"location":"examples/the_farmers_problem/#The-farmer's-problem","page":"The farmer's problem","title":"The farmer's problem","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"This problem is taken from Section 1.1 of the book Birge, J. R., & Louveaux, F. (2011). Introduction to Stochastic Programming. New York, NY: Springer New York. Paragraphs in quotes are taken verbatim.","category":"page"},{"location":"examples/the_farmers_problem/#Problem-description","page":"The farmer's problem","title":"Problem description","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"Consider a European farmer who specializes in raising wheat, corn, and sugar beets on his 500 acres of land. During the winter, [they want] to decide how much land to devote to each crop.The farmer knows that at least 200 tons (T) of wheat and 240 T of corn are needed for cattle feed. These amounts can be raised on the farm or bought from a wholesaler. Any production in excess of the feeding requirement would be sold.Over the last decade, mean selling prices have been $170 and $150 per ton of wheat and corn, respectively. The purchase prices are 40% more than this due to the wholesaler’s margin and transportation costs.Another profitable crop is sugar beet, which [they expect] to sell at $36/T; however, the European Commission imposes a quota on sugar beet production. Any amount in excess of the quota can be sold only at $10/T. The farmer’s quota for next year is 6000 T.\"Based on past experience, the farmer knows that the mean yield on [their] land is roughly 2.5 T, 3 T, and 20 T per acre for wheat, corn, and sugar beets, respectively.[To introduce uncertainty,] assume some correlation among the yields of the different crops. A very simplified representation of this would be to assume that years are good, fair, or bad for all crops, resulting in above average, average, or below average yields for all crops. 
To fix these ideas, above and below average indicate a yield 20% above or below the mean yield.","category":"page"},{"location":"examples/the_farmers_problem/#Problem-data","page":"The farmer's problem","title":"Problem data","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"The area of the farm.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"MAX_AREA = 500.0","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"There are three crops:","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"CROPS = [:wheat, :corn, :sugar_beet]","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"Each of the crops has a different planting cost ($/acre).","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"PLANTING_COST = Dict(:wheat => 150.0, :corn => 230.0, :sugar_beet => 260.0)","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"The farmer requires a minimum quantity of wheat and corn, but not of sugar beet (tonnes).","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"MIN_QUANTITIES = Dict(:wheat => 200.0, :corn => 240.0, :sugar_beet => 0.0)","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"In Europe, there is a quota system for producing crops. The farmer owns the following quota for each crop (tonnes):","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"QUOTA_MAX = Dict(:wheat => Inf, :corn => Inf, :sugar_beet => 6_000.0)","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"The farmer can sell crops produced under the quota for the following amounts ($/tonne):","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"SELL_IN_QUOTA = Dict(:wheat => 170.0, :corn => 150.0, :sugar_beet => 36.0)","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"If they sell more than their allotted quota, the farmer earns the following on each tonne of crop above the quota ($/tonne):","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"SELL_NO_QUOTA = Dict(:wheat => 0.0, :corn => 0.0, :sugar_beet => 10.0)","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"The purchase prices for wheat and corn are 40% more than their sales price. However, the description does not address the purchase price of sugar beet. 
Therefore, we use a large value of $1,000/tonne.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"BUY_PRICE = Dict(:wheat => 238.0, :corn => 210.0, :sugar_beet => 1_000.0)","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"On average, each crop has the following yield in tonnes/acre:","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"MEAN_YIELD = Dict(:wheat => 2.5, :corn => 3.0, :sugar_beet => 20.0)","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"However, the yield is random. In good years, the yield is +20% above average, and in bad years, the yield is -20% below average.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"YIELD_MULTIPLIER = Dict(:good => 1.2, :fair => 1.0, :bad => 0.8)","category":"page"},{"location":"examples/the_farmers_problem/#Mathematical-formulation","page":"The farmer's problem","title":"Mathematical formulation","text":"","category":"section"},{"location":"examples/the_farmers_problem/#SDDP.jl-code","page":"The farmer's problem","title":"SDDP.jl code","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"note: Note\nIn what follows, we make heavy use of the fact that you can look up variables by their symbol name in a JuMP model as follows:@variable(model, x)\nmodel[:x]Read the JuMP documentation if this isn't familiar to you.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"First up, load SDDP.jl and a solver. For this example, we use HiGHS.jl.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"using SDDP, HiGHS","category":"page"},{"location":"examples/the_farmers_problem/#State-variables","page":"The farmer's problem","title":"State variables","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"State variables are the information that flows between stages. 
In our example, the state variables are the areas of land devoted to growing each crop.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"function add_state_variables(subproblem)\n @variable(subproblem, area[c = CROPS] >= 0, SDDP.State, initial_value = 0)\nend","category":"page"},{"location":"examples/the_farmers_problem/#First-stage-problem","page":"The farmer's problem","title":"First stage problem","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"We can only plant a maximum of 500 acres, and we want to minimize the planting cost.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"function create_first_stage_problem(subproblem)\n @constraint(\n subproblem,\n sum(subproblem[:area][c].out for c in CROPS) <= MAX_AREA\n )\n @stageobjective(\n subproblem,\n -sum(PLANTING_COST[c] * subproblem[:area][c].out for c in CROPS)\n )\nend","category":"page"},{"location":"examples/the_farmers_problem/#Second-stage-problem","page":"The farmer's problem","title":"Second stage problem","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"Now let's consider the second stage problem. This is more complicated than the first stage, so we've broken it down into four sections:","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"control variables\nconstraints\nthe objective\nthe uncertainty","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"First, let's add the second stage control variables.","category":"page"},{"location":"examples/the_farmers_problem/#Variables","page":"The farmer's problem","title":"Variables","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"We add four types of control variables. Technically, the yield isn't a control variable. However, we add it as a dummy \"helper\" variable because it will be used when we add uncertainty.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"function second_stage_variables(subproblem)\n @variables(subproblem, begin\n 0 <= yield[c = CROPS] # tonnes/acre\n 0 <= buy[c = CROPS] # tonnes\n 0 <= sell_in_quota[c = CROPS] <= QUOTA_MAX[c] # tonnes\n 0 <= sell_no_quota[c = CROPS] # tonnes\n end)\nend","category":"page"},{"location":"examples/the_farmers_problem/#Constraints","page":"The farmer's problem","title":"Constraints","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"We need to define the minimum quantity constraint. 
This ensures that MIN_QUANTITIES[c] of each crop is produced.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"function second_stage_constraint_min_quantity(subproblem)\n @constraint(\n subproblem,\n [c = CROPS],\n subproblem[:yield][c] + subproblem[:buy][c] -\n subproblem[:sell_in_quota][c] - subproblem[:sell_no_quota][c] >=\n MIN_QUANTITIES[c]\n )\nend","category":"page"},{"location":"examples/the_farmers_problem/#Objective","page":"The farmer's problem","title":"Objective","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"The objective of the second stage is to maximise revenue from selling crops, less the cost of buying corn and wheat if necessary to meet the minimum quantity constraint.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"function second_stage_objective(subproblem)\n @stageobjective(\n subproblem,\n sum(\n SELL_IN_QUOTA[c] * subproblem[:sell_in_quota][c] +\n SELL_NO_QUOTA[c] * subproblem[:sell_no_quota][c] -\n BUY_PRICE[c] * subproblem[:buy][c] for c in CROPS\n )\n )\nend","category":"page"},{"location":"examples/the_farmers_problem/#Random-variables","page":"The farmer's problem","title":"Random variables","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"Then, in the SDDP.parameterize function, we set the coefficient using JuMP.set_normalized_coefficient.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"function second_stage_uncertainty(subproblem)\n @constraint(\n subproblem,\n uncertainty[c = CROPS],\n 1.0 * subproblem[:area][c].in == subproblem[:yield][c]\n )\n SDDP.parameterize(subproblem, [:good, :fair, :bad]) do ω\n for c in CROPS\n JuMP.set_normalized_coefficient(\n uncertainty[c],\n subproblem[:area][c].in,\n MEAN_YIELD[c] * YIELD_MULTIPLIER[ω],\n )\n end\n end\nend","category":"page"},{"location":"examples/the_farmers_problem/#Putting-it-all-together","page":"The farmer's problem","title":"Putting it all together","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"Now we're ready to build the multistage stochastic programming model. In addition to the things already discussed, we need a few extra pieces of information.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"First, we are maximizing, so we set sense = :Max. Second, we need to provide a valid upper bound. (See Choosing an initial bound for more on this.) We know from Birge and Louveaux that the optimal solution is $108,390. 
So, let's choose $500,000 just to be safe.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"Here is the full model.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"model = SDDP.LinearPolicyGraph(;\n stages = 2,\n sense = :Max,\n upper_bound = 500_000.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, stage\n add_state_variables(subproblem)\n if stage == 1\n create_first_stage_problem(subproblem)\n else\n second_stage_variables(subproblem)\n second_stage_constraint_min_quantity(subproblem)\n second_stage_uncertainty(subproblem)\n second_stage_objective(subproblem)\n end\nend","category":"page"},{"location":"examples/the_farmers_problem/#Training-a-policy","page":"The farmer's problem","title":"Training a policy","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"Now that we've built a model, we need to train it using SDDP.train. The keyword iteration_limit stops the training after 40 iterations. See Choose a stopping rule for other ways to stop the training.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"SDDP.train(model; iteration_limit = 40)","category":"page"},{"location":"examples/the_farmers_problem/#Checking-the-policy","page":"The farmer's problem","title":"Checking the policy","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"Birge and Louveaux report that the optimal objective value is $108,390. Check that we got the correct solution using SDDP.calculate_bound:","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"@assert isapprox(SDDP.calculate_bound(model), 108_390.0, atol = 0.1)","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"EditURL = \"warnings.jl\"","category":"page"},{"location":"tutorial/warnings/#Words-of-warning","page":"Words of warning","title":"Words of warning","text":"","category":"section"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"SDDP is a powerful solution technique for multistage stochastic programming. 
However, there are a number of subtle things to be aware of before creating your own models.","category":"page"},{"location":"tutorial/warnings/#Relatively-complete-recourse","page":"Words of warning","title":"Relatively complete recourse","text":"","category":"section"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"Models built in SDDP.jl need a property called relatively complete recourse.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"One definition of relatively complete recourse is that all feasible decisions (not necessarily optimal) in a subproblem lead to feasible decisions in future subproblems.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"For example, in the following problem, one feasible first stage decision is x.out = 0. But this causes an infeasibility in the second stage which requires x.in >= 1. This will throw an error about infeasibility if you try to solve.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"using SDDP, HiGHS\n\nmodel = SDDP.LinearPolicyGraph(;\n stages = 2,\n lower_bound = 0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variable(sp, x >= 0, SDDP.State, initial_value = 1)\n if t == 2\n @constraint(sp, x.in >= 1)\n end\n @stageobjective(sp, x.out)\nend\n\ntry #hide\n SDDP.train(model; iteration_limit = 1, print_level = 0)\ncatch err #hide\n showerror(stderr, err) #hide\nend #hide","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"warning: Warning\nThe actual constraints causing the infeasibilities can be deceptive! A good strategy to debug is to comment out all constraints. Then, one-by-one, un-comment the constraints and try resolving the model to check if it finds a feasible solution.","category":"page"},{"location":"tutorial/warnings/#Numerical-stability","page":"Words of warning","title":"Numerical stability","text":"","category":"section"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"If you aren't aware, SDDP builds an outer-approximation to a convex function using cutting planes. This results in a formulation that is particularly hard for solvers like HiGHS, Gurobi, and CPLEX to deal with. As a result, you may run into weird behavior. This behavior could include:","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"Iterations suddenly taking a long time (the solver stalled)\nSubproblems turning infeasible or unbounded after many iterations\nSolvers returning \"Numerical Error\" statuses","category":"page"},{"location":"tutorial/warnings/#Problem-scaling","page":"Words of warning","title":"Problem scaling","text":"","category":"section"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"In almost all cases, the cause of this is poor problem scaling. 
For our purpose, poor problem scaling means having variables with very large numbers and variables with very small numbers in the same model.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"tip: Tip\nGurobi has an excellent set of articles on numerical issues and how to avoid them.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"Consider, for example, the hydro-thermal scheduling problem we have been discussing in previous tutorials.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"If we define the volume of the reservoir in terms of m³, then a lake might have a capacity of 10^10 m³: @variable(subproblem, 0 <= volume <= 10^10). Moreover, the cost per cubic meter might be around $0.05/m³. To calculate the value of water in our reservoir, we need to multiply a variable on the order of 10^10 by one on the order of 10⁻²! That is twelve orders of magnitude!","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"To improve the performance of the SDDP algorithm (and reduce the chance of weird behavior), try to re-scale the units of the problem in order to reduce the largest difference in magnitude. For example, if we talk in terms of million m³, then we have a capacity of 10⁴ million m³, and a price of $50,000 per million m³. Now things are only one order of magnitude apart.","category":"page"},{"location":"tutorial/warnings/#Numerical-stability-report","page":"Words of warning","title":"Numerical stability report","text":"","category":"section"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"To aid in the diagnosis of numerical issues, you can call SDDP.numerical_stability_report. By default, this aggregates all of the nodes into a single report. You can produce a stability report for each node by passing by_node=true.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"using SDDP\n\nmodel =\n SDDP.LinearPolicyGraph(; stages = 2, lower_bound = -1e10) do subproblem, t\n @variable(subproblem, x >= -1e7, SDDP.State, initial_value = 1e-5)\n @constraint(subproblem, 1e9 * x.out >= 1e-6 * x.in + 1e-8)\n @stageobjective(subproblem, 1e9 * x.out)\n end\n\nSDDP.numerical_stability_report(model)","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"The report analyses the magnitude (in absolute terms) of the coefficients in the constraint matrix, the objective function, any variable bounds, and in the RHS of the constraints. A warning will be thrown if SDDP.jl detects very large or small values. 
As discussed in Problem scaling, this is an indication that you should reformulate your model.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"By default, a numerical stability check is run when you call SDDP.train, although it can be turned off by passing run_numerical_stability_report = false.","category":"page"},{"location":"tutorial/warnings/#Solver-specific-options","page":"Words of warning","title":"Solver-specific options","text":"","category":"section"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"If you have a particularly troublesome model, you should investigate setting solver-specific options to improve the numerical stability of each solver. For example, Gurobi has a NumericFocus option.","category":"page"},{"location":"tutorial/warnings/#Choosing-an-initial-bound","page":"Words of warning","title":"Choosing an initial bound","text":"","category":"section"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"One of the important requirements when building an SDDP model is to choose an appropriate bound on the objective (lower if minimizing, upper if maximizing). However, it can be hard to choose a bound if you don't know the solution! (Which is very likely.)","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"The bound should be as tight as you can safely make it, because a tighter bound helps with convergence and with the numerical issues discussed above. However, if the bound is too tight (a lower bound above the optimal cost when minimizing, or an upper bound below it when maximizing), it will cut off part of the cost-to-go function and lead to a sub-optimal policy.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"Consider the following simple model, where we first set lower_bound to 0.0.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"using SDDP, HiGHS\n\nmodel = SDDP.LinearPolicyGraph(;\n stages = 3,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, t\n @variable(subproblem, x >= 0, SDDP.State, initial_value = 2)\n @variable(subproblem, u >= 0)\n @variable(subproblem, v >= 0)\n @constraint(subproblem, x.out == x.in - u)\n @constraint(subproblem, u + v == 1.5)\n @stageobjective(subproblem, t * v)\nend\n\nSDDP.train(model; iteration_limit = 5, run_numerical_stability_report = false)","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"Now consider the case when we set the lower_bound to 10.0:","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"using SDDP, HiGHS\n\nmodel = SDDP.LinearPolicyGraph(;\n stages = 3,\n sense = :Min,\n lower_bound = 10.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, t\n @variable(subproblem, x >= 0, SDDP.State, initial_value = 2)\n @variable(subproblem, u >= 0)\n @variable(subproblem, v >= 0)\n @constraint(subproblem, x.out == x.in - u)\n @constraint(subproblem, u + v == 1.5)\n @stageobjective(subproblem, t * v)\nend\n\nSDDP.train(model; iteration_limit = 5, run_numerical_stability_report = false)","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"How do we tell which is more appropriate? 
There are a few clues that you should look out for.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"The bound converges to a value above (if minimizing) the simulated cost of the policy. In this case, the problem is deterministic, so it is easy to tell. But you can also check by performing a Monte Carlo simulation like we did in An introduction to SDDP.jl.\nThe bound converges to different values when we change the bound. This is another clear give-away. The bound provided by the user is only used in the initial iterations. It should not change the value of the converged policy. Thus, if you don't know an appropriate value for the bound, choose an initial value, and then increase (or decrease) the value of the bound to confirm that the value of the policy doesn't change.\nThe bound converges to a value close to the bound provided by the user. This varies between models, but notice that 11.0 is quite close to 10.0 compared with 3.5 and 0.0.","category":"page"},{"location":"guides/add_a_multidimensional_state_variable/#Add-a-multi-dimensional-state-variable","page":"Add a multi-dimensional state variable","title":"Add a multi-dimensional state variable","text":"","category":"section"},{"location":"guides/add_a_multidimensional_state_variable/","page":"Add a multi-dimensional state variable","title":"Add a multi-dimensional state variable","text":"DocTestSetup = quote\n using SDDP, HiGHS\nend","category":"page"},{"location":"guides/add_a_multidimensional_state_variable/","page":"Add a multi-dimensional state variable","title":"Add a multi-dimensional state variable","text":"Just like normal JuMP variables, it is possible to create containers of state variables.","category":"page"},{"location":"guides/add_a_multidimensional_state_variable/","page":"Add a multi-dimensional state variable","title":"Add a multi-dimensional state variable","text":"julia> model = SDDP.LinearPolicyGraph(\n stages=1, lower_bound = 0, optimizer = HiGHS.Optimizer\n ) do subproblem, t\n # A scalar state variable.\n @variable(subproblem, x >= 0, SDDP.State, initial_value = 0)\n println(\"Lower bound of outgoing x is: \", JuMP.lower_bound(x.out))\n # A vector of state variables.\n @variable(subproblem, y[i = 1:2] >= i, SDDP.State, initial_value = i)\n println(\"Lower bound of outgoing y[1] is: \", JuMP.lower_bound(y[1].out))\n # A JuMP.Containers.DenseAxisArray of state variables.\n @variable(subproblem,\n z[i = 3:4, j = [:A, :B]] >= i, SDDP.State, initial_value = i)\n println(\"Lower bound of outgoing z[3, :B] is: \", JuMP.lower_bound(z[3, :B].out))\n end;\nLower bound of outgoing x is: 0.0\nLower bound of outgoing y[1] is: 1.0\nLower bound of outgoing z[3, :B] is: 3.0","category":"page"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"EditURL = \"objective_uncertainty.jl\"","category":"page"},{"location":"tutorial/objective_uncertainty/#Uncertainty-in-the-objective-function","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"","category":"section"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"In the previous tutorial, An introduction to SDDP.jl, we created a stochastic hydro-thermal scheduling model. In this tutorial, we extend the problem by adding uncertainty to the fuel costs.","category":"page"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"Previously, we assumed that the fuel cost was deterministic: $50/MWh in the first stage, $100/MWh in the second stage, and $150/MWh in the third stage. For this tutorial, we assume that in addition to these base costs, the actual fuel cost is correlated with the inflows.","category":"page"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"Our new model for the uncertainty is given by the following table:","category":"page"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"ω 1 2 3\nP(ω) 1/3 1/3 1/3\ninflow 0 50 100\nfuel multiplier 1.5 1.0 0.75","category":"page"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"In stage t, the objective is now to minimize:","category":"page"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"fuel_multiplier * fuel_cost[t] * thermal_generation","category":"page"},{"location":"tutorial/objective_uncertainty/#Creating-a-model","page":"Uncertainty in the objective function","title":"Creating a model","text":"","category":"section"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"To add an uncertain objective, we can simply call @stageobjective from inside the SDDP.parameterize function.","category":"page"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"using SDDP, HiGHS\n\nmodel = SDDP.LinearPolicyGraph(;\n stages = 3,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, t\n # Define the state variable.\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n # Define the control variables.\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n inflow\n end)\n # Define the constraints\n @constraints(\n subproblem,\n begin\n volume.out == volume.in + inflow - hydro_generation - hydro_spill\n thermal_generation + hydro_generation == 150.0\n end\n )\n fuel_cost = [50.0, 100.0, 150.0]\n # Parameterize the subproblem.\n Ω = [\n (inflow = 0.0, fuel_multiplier = 1.5),\n (inflow = 50.0, fuel_multiplier = 1.0),\n (inflow = 100.0, fuel_multiplier = 0.75),\n ]\n SDDP.parameterize(subproblem, Ω, [1 / 3, 1 / 3, 1 / 3]) do ω\n JuMP.fix(inflow, ω.inflow)\n @stageobjective(\n subproblem,\n ω.fuel_multiplier * fuel_cost[t] * thermal_generation\n )\n end\nend","category":"page"},{"location":"tutorial/objective_uncertainty/#Training-and-simulating-the-policy","page":"Uncertainty in the objective function","title":"Training and 
simulating the policy","text":"","category":"section"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"As in the previous two tutorials, we train and simulate the policy:","category":"page"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"SDDP.train(model)\n\nsimulations = SDDP.simulate(model, 500)\n\nobjective_values =\n [sum(stage[:stage_objective] for stage in sim) for sim in simulations]\n\nusing Statistics\n\nμ = round(mean(objective_values); digits = 2)\nci = round(1.96 * std(objective_values) / sqrt(500); digits = 2)\n\nprintln(\"Confidence interval: \", μ, \" ± \", ci)\nprintln(\"Lower bound: \", round(SDDP.calculate_bound(model); digits = 2))","category":"page"},{"location":"guides/add_a_risk_measure/#Add-a-risk-measure","page":"Add a risk measure","title":"Add a risk measure","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"DocTestSetup = quote\n using SDDP, HiGHS\nend","category":"page"},{"location":"guides/add_a_risk_measure/#Training-a-risk-averse-model","page":"Add a risk measure","title":"Training a risk-averse model","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.jl supports a variety of risk measures. Two common ones are SDDP.Expectation and SDDP.WorstCase. Let's see how to train a policy using them. There are three possible ways.","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"If the same risk measure is used at every node in the policy graph, we can just pass an instance of one of the risk measures to the risk_measure keyword argument of the SDDP.train function.","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.train(\n model,\n risk_measure = SDDP.WorstCase(),\n iteration_limit = 10\n)","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"However, if you want different risk measures at different nodes, there are two options. First, you can pass risk_measure a dictionary of risk measures, with one entry for each node. 
The keys of the dictionary are the indices of the nodes.","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.train(\n model,\n risk_measure = Dict(\n 1 => SDDP.Expectation(),\n 2 => SDDP.WorstCase()\n ),\n iteration_limit = 10\n)","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"An alternative method is to pass risk_measure a function that takes one argument, the index of a node, and returns an instance of a risk measure:","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.train(\n model,\n risk_measure = (node_index) -> begin\n if node_index == 1\n return SDDP.Expectation()\n else\n return SDDP.WorstCase()\n end\n end,\n iteration_limit = 10\n)","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"note: Note\nIf you simulate the policy, the simulated value is the risk-neutral value of the policy.","category":"page"},{"location":"guides/add_a_risk_measure/#Risk-measures","page":"Add a risk measure","title":"Risk measures","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"To illustrate the risk-measures included in SDDP.jl, we consider a discrete random variable with four outcomes.","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"The random variable is supported on the values 1, 2, 3, and 4:","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"noise_supports = [1, 2, 3, 4]","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"The associated probability of each outcome is as follows:","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"nominal_probability = [0.1, 0.2, 0.3, 0.4]","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"With each outcome ω, the agent observes a cost Z(ω):","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"cost_realizations = [5.0, 4.0, 6.0, 2.0]","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"We assume that we are minimizing:","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"is_minimization = true","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"Finally, we create a vector that will be used to store the risk-adjusted probabilities:","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"risk_adjusted_probability = zeros(4)","category":"page"},{"location":"guides/add_a_risk_measure/#Expectation","page":"Add a risk measure","title":"Expectation","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk 
measure","text":"SDDP.Expectation","category":"page"},{"location":"guides/add_a_risk_measure/#SDDP.Expectation","page":"Add a risk measure","title":"SDDP.Expectation","text":"Expectation()\n\nThe Expectation risk measure. Identical to taking the expectation with respect to the nominal distribution.\n\n\n\n\n\n","category":"type"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"using SDDP\nSDDP.adjust_probability(\n SDDP.Expectation(),\n risk_adjusted_probability,\n nominal_probability,\n noise_supports,\n cost_realizations,\n is_minimization\n)\nrisk_adjusted_probability","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.Expectation is the default risk measure in SDDP.jl.","category":"page"},{"location":"guides/add_a_risk_measure/#Worst-case","page":"Add a risk measure","title":"Worst-case","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.WorstCase","category":"page"},{"location":"guides/add_a_risk_measure/#SDDP.WorstCase","page":"Add a risk measure","title":"SDDP.WorstCase","text":"WorstCase()\n\nThe worst-case risk measure. Places all of the probability weight on the worst outcome.\n\n\n\n\n\n","category":"type"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.adjust_probability(\n SDDP.WorstCase(),\n risk_adjusted_probability,\n nominal_probability,\n noise_supports,\n cost_realizations,\n is_minimization\n)\nrisk_adjusted_probability","category":"page"},{"location":"guides/add_a_risk_measure/#Average-value-at-risk-(AV@R)","page":"Add a risk measure","title":"Average value at risk (AV@R)","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.AVaR","category":"page"},{"location":"guides/add_a_risk_measure/#SDDP.AVaR","page":"Add a risk measure","title":"SDDP.AVaR","text":"AVaR(β)\n\nThe average value at risk (AV@R) risk measure.\n\nComputes the expectation of the β fraction of worst outcomes. β must be in [0, 1]. When β=1, this is equivalent to the Expectation risk measure. When β=0, this is equivalent to the WorstCase risk measure.\n\nAV@R is also known as the conditional value at risk (CV@R) or expected shortfall.\n\n\n\n\n\n","category":"type"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.adjust_probability(\n SDDP.AVaR(0.5),\n risk_adjusted_probability,\n nominal_probability,\n noise_supports,\n cost_realizations,\n is_minimization\n)\nrisk_adjusted_probability","category":"page"},{"location":"guides/add_a_risk_measure/#Convex-combination-of-risk-measures","page":"Add a risk measure","title":"Convex combination of risk measures","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"Using the axioms of coherent risk measures, it is easy to show that any convex combination of coherent risk measures is also a coherent risk measure. 
Convex combinations of risk measures can be created directly:","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"cvx_comb_measure = 0.5 * SDDP.Expectation() + 0.5 * SDDP.WorstCase()\nSDDP.adjust_probability(\n cvx_comb_measure,\n risk_adjusted_probability,\n nominal_probability,\n noise_supports,\n cost_realizations,\n is_minimization\n)\nrisk_adjusted_probability","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"As a special case, the SDDP.EAVaR risk-measure is a convex combination of SDDP.Expectation and SDDP.AVaR:","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.EAVaR(beta=0.25, lambda=0.4)","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.EAVaR","category":"page"},{"location":"guides/add_a_risk_measure/#SDDP.EAVaR","page":"Add a risk measure","title":"SDDP.EAVaR","text":"EAVaR(;lambda=1.0, beta=1.0)\n\nA risk measure that is a convex combination of Expectation and Average Value @ Risk (also called Conditional Value @ Risk).\n\n λ * E[x] + (1 - λ) * AV@R(β)[x]\n\nKeyword Arguments\n\nlambda: Convex weight on the expectation ((1-lambda) weight is put on the AV@R component. Inreasing values of lambda are less risk averse (more weight on expectation).\nbeta: The quantile at which to calculate the Average Value @ Risk. Increasing values of beta are less risk averse. If beta=0, then the AV@R component is the worst case risk measure.\n\n\n\n\n\n","category":"function"},{"location":"guides/add_a_risk_measure/#Distributionally-robust","page":"Add a risk measure","title":"Distributionally robust","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.jl supports two types of distributionally robust risk measures: the modified Χ² method of Philpott et al. (2018), and a method based on the Wasserstein distance metric.","category":"page"},{"location":"guides/add_a_risk_measure/#Modified-Chi-squard","page":"Add a risk measure","title":"Modified Chi-squard","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.ModifiedChiSquared","category":"page"},{"location":"guides/add_a_risk_measure/#SDDP.ModifiedChiSquared","page":"Add a risk measure","title":"SDDP.ModifiedChiSquared","text":"ModifiedChiSquared(radius::Float64; minimum_std=1e-5)\n\nThe distributionally robust SDDP risk measure of Philpott, A., de Matos, V., Kapelevich, L. Distributionally robust SDDP. Computational Management Science (2018) 165:431-454.\n\nExplanation\n\nIn a Distributionally Robust Optimization (DRO) approach, we modify the probabilities we associate with all future scenarios so that the resulting probability distribution is the \"worst case\" probability distribution, in some sense.\n\nIn each backward pass we will compute a worst case probability distribution vector p. We compute p so that:\n\np ∈ argmax p'z\n s.t. [r; p - a] in SecondOrderCone()\n sum(p) == 1\n p >= 0\n\nwhere\n\nz is a vector of future costs. We assume that our aim is to minimize future cost p'z. 
If we maximize reward, we would have p ∈ argmin{p'z}.\na is the uniform distribution\nr is a user specified radius - the larger the radius, the more conservative the policy.\n\nNotes\n\nThe largest radius that will work with S scenarios is sqrt((S-1)/S).\n\nIf the uncorrected standard deviation of the objecive realizations is less than minimum_std, then the risk-measure will default to Expectation().\n\nThis code was contributed by Lea Kapelevich.\n\n\n\n\n\n","category":"type"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.adjust_probability(\n SDDP.ModifiedChiSquared(0.5),\n risk_adjusted_probability,\n [0.25, 0.25, 0.25, 0.25],\n noise_supports,\n cost_realizations,\n is_minimization\n)\nrisk_adjusted_probability","category":"page"},{"location":"guides/add_a_risk_measure/#Wasserstein","page":"Add a risk measure","title":"Wasserstein","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.Wasserstein","category":"page"},{"location":"guides/add_a_risk_measure/#SDDP.Wasserstein","page":"Add a risk measure","title":"SDDP.Wasserstein","text":"Wasserstein(norm::Function, solver_factory; alpha::Float64)\n\nA distributionally-robust risk measure based on the Wasserstein distance.\n\nAs alpha increases, the measure becomes more risk-averse. When alpha=0, the measure is equivalent to the expectation operator. As alpha increases, the measure approaches the Worst-case risk measure.\n\n\n\n\n\n","category":"type"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"import HiGHS\nSDDP.adjust_probability(\n SDDP.Wasserstein(HiGHS.Optimizer; alpha=0.5) do x, y\n return abs(x - y)\n end,\n risk_adjusted_probability,\n nominal_probability,\n noise_supports,\n cost_realizations,\n is_minimization\n)\nrisk_adjusted_probability","category":"page"},{"location":"guides/add_a_risk_measure/#Entropic","page":"Add a risk measure","title":"Entropic","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.Entropic","category":"page"},{"location":"guides/add_a_risk_measure/#SDDP.Entropic","page":"Add a risk measure","title":"SDDP.Entropic","text":"Entropic(γ::Float64)\n\nThe entropic risk measure as described by:\n\nDowson, O., Morton, D.P. & Pagnoncelli, B.K. Incorporating convex risk\nmeasures into multistage stochastic programming algorithms. Annals of\nOperations Research (2022). 
[doi](https://doi.org/10.1007/s10479-022-04977-w).\n\nAs γ increases, the measure becomes more risk-averse.\n\n\n\n\n\n","category":"type"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.adjust_probability(\n SDDP.Entropic(0.1),\n risk_adjusted_probability,\n nominal_probability,\n noise_supports,\n cost_realizations,\n is_minimization\n)\nrisk_adjusted_probability","category":"page"},{"location":"examples/infinite_horizon_trivial/","page":"Infinite horizon trivial","title":"Infinite horizon trivial","text":"EditURL = \"infinite_horizon_trivial.jl\"","category":"page"},{"location":"examples/infinite_horizon_trivial/#Infinite-horizon-trivial","page":"Infinite horizon trivial","title":"Infinite horizon trivial","text":"","category":"section"},{"location":"examples/infinite_horizon_trivial/","page":"Infinite horizon trivial","title":"Infinite horizon trivial","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/infinite_horizon_trivial/","page":"Infinite horizon trivial","title":"Infinite horizon trivial","text":"using SDDP, HiGHS, Test\n\nfunction infinite_trivial()\n graph = SDDP.Graph(\n :root_node,\n [:week],\n [(:root_node => :week, 1.0), (:week => :week, 0.9)],\n )\n model = SDDP.PolicyGraph(\n graph;\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, node\n @variable(subproblem, state, SDDP.State, initial_value = 0)\n @constraint(subproblem, state.in == state.out)\n @stageobjective(subproblem, 2.0)\n end\n SDDP.train(model; log_frequency = 10)\n @test SDDP.calculate_bound(model) ≈ 2.0 / (1 - 0.9) atol = 1e-3\n return\nend\n\ninfinite_trivial()","category":"page"},{"location":"examples/air_conditioning/","page":"Air conditioning","title":"Air conditioning","text":"EditURL = \"air_conditioning.jl\"","category":"page"},{"location":"examples/air_conditioning/#Air-conditioning","page":"Air conditioning","title":"Air conditioning","text":"","category":"section"},{"location":"examples/air_conditioning/","page":"Air conditioning","title":"Air conditioning","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/air_conditioning/","page":"Air conditioning","title":"Air conditioning","text":"Taken from Anthony Papavasiliou's notes on SDDP","category":"page"},{"location":"examples/air_conditioning/","page":"Air conditioning","title":"Air conditioning","text":"Consider the following problem","category":"page"},{"location":"examples/air_conditioning/","page":"Air conditioning","title":"Air conditioning","text":"Produce air conditioners for 3 months\n200 units/month at 100 $/unit\nOvertime costs 300 $/unit\nKnown demand of 100 units for period 1\nEqually likely demand, 100 or 300 units, for periods 2, 3\nStorage cost is 50 $/unit\nAll demand must be met","category":"page"},{"location":"examples/air_conditioning/","page":"Air conditioning","title":"Air conditioning","text":"The known optimal solution is $62,500","category":"page"},{"location":"examples/air_conditioning/","page":"Air conditioning","title":"Air conditioning","text":"using SDDP, HiGHS, Test\n\nfunction air_conditioning_model(duality_handler)\n model = SDDP.LinearPolicyGraph(;\n stages = 3,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, stage\n @variable(\n sp,\n 0 <= stored_production <= 100,\n Int,\n SDDP.State,\n initial_value = 0\n )\n @variable(sp, 0 <= production <= 200, Int)\n @variable(sp, overtime >= 0, Int)\n @variable(sp, demand)\n DEMAND = [[100.0], [100.0, 300.0], [100.0, 300.0]]\n SDDP.parameterize(ω -> JuMP.fix(demand, ω), sp, DEMAND[stage])\n @constraint(\n sp,\n stored_production.out ==\n stored_production.in + production + overtime - demand\n )\n @stageobjective(\n sp,\n 100 * production + 300 * overtime + 50 * stored_production.out\n )\n end\n SDDP.train(model; duality_handler = duality_handler)\n @test isapprox(SDDP.calculate_bound(model), 62_500.0, atol = 0.1)\n return\nend\n\nfor duality_handler in [SDDP.LagrangianDuality(), SDDP.ContinuousConicDuality()]\n air_conditioning_model(duality_handler)\nend","category":"page"},{"location":"examples/sldp_example_two/","page":"SLDP: example 2","title":"SLDP: example 2","text":"EditURL = \"sldp_example_two.jl\"","category":"page"},{"location":"examples/sldp_example_two/#SLDP:-example-2","page":"SLDP: example 2","title":"SLDP: example 2","text":"","category":"section"},{"location":"examples/sldp_example_two/","page":"SLDP: example 2","title":"SLDP: example 2","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/sldp_example_two/","page":"SLDP: example 2","title":"SLDP: example 2","text":"This example is derived from Section 4.3 of the paper: Ahmed, S., Cabral, F. G., & da Costa, B. F. P. (2019). Stochastic Lipschitz Dynamic Programming. Optimization Online. 
PDF","category":"page"},{"location":"examples/sldp_example_two/","page":"SLDP: example 2","title":"SLDP: example 2","text":"using SDDP\nimport HiGHS\nimport Test\n\nfunction sldp_example_two(; first_stage_integer::Bool = true, N = 2)\n model = SDDP.LinearPolicyGraph(;\n stages = 2,\n lower_bound = -100.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, t\n @variable(sp, 0 <= x[1:2] <= 5, SDDP.State, initial_value = 0.0)\n if t == 1\n if first_stage_integer\n @variable(sp, 0 <= u[1:2] <= 5, Int)\n @constraint(sp, [i = 1:2], u[i] == x[i].out)\n end\n @stageobjective(sp, -1.5 * x[1].out - 4 * x[2].out)\n else\n @variable(sp, 0 <= y[1:4] <= 1, Bin)\n @variable(sp, ω[1:2])\n @stageobjective(sp, -16 * y[1] - 19 * y[2] - 23 * y[3] - 28 * y[4])\n @constraint(\n sp,\n 2 * y[1] + 3 * y[2] + 4 * y[3] + 5 * y[4] <= ω[1] - x[1].in\n )\n @constraint(\n sp,\n 6 * y[1] + 1 * y[2] + 3 * y[3] + 2 * y[4] <= ω[2] - x[2].in\n )\n steps = range(5; stop = 15, length = N)\n SDDP.parameterize(sp, [[i, j] for i in steps for j in steps]) do φ\n return JuMP.fix.(ω, φ)\n end\n end\n end\n if get(ARGS, 1, \"\") == \"--write\"\n # Run `$ julia sldp_example_two.jl --write` to update the benchmark\n # model directory\n model_dir = joinpath(@__DIR__, \"..\", \"..\", \"..\", \"benchmarks\", \"models\")\n SDDP.write_to_file(\n model,\n joinpath(model_dir, \"sldp_example_two_$(N).sof.json.gz\");\n test_scenarios = 30,\n )\n return\n end\n SDDP.train(model; log_frequency = 10)\n bound = SDDP.calculate_bound(model)\n\n if N == 2\n Test.@test bound <= -57.0\n elseif N == 3\n Test.@test bound <= -59.33\n elseif N == 6\n Test.@test bound <= -61.22\n end\n return\nend\n\nsldp_example_two(; N = 2)\nsldp_example_two(; N = 3)\nsldp_example_two(; N = 6)","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"EditURL = \"objective_states.jl\"","category":"page"},{"location":"tutorial/objective_states/#Objective-states","page":"Objective states","title":"Objective states","text":"","category":"section"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"There are many applications in which we want to model a price process that follows some auto-regressive process. 
Common examples include stock prices on financial exchanges and spot-prices in energy markets.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"However, it is well known that these cannot be incorporated into SDDP because they result in cost-to-go functions that are convex with respect to some state variables (e.g., the reservoir levels) and concave with respect to other state variables (e.g., the spot price in the current stage).","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"To overcome this problem, the approach in the literature has been to discretize the price process in order to model it using a Markovian policy graph like those discussed in Markovian policy graphs.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"However, recent work offers a way to include stagewise-dependent objective uncertainty into the objective function of SDDP subproblems. Readers are directed to the following works for an introduction:","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"Downward, A., Dowson, O., and Baucke, R. (2017). Stochastic dual dynamic programming with stagewise dependent objective uncertainty. Optimization Online. link\nDowson, O. PhD Thesis. University of Auckland, 2018. link","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"The method discussed in the above works introduces the concept of an objective state into SDDP. Unlike normal state variables in SDDP (e.g., the volume of water in the reservoir), the cost-to-go function is concave with respect to the objective states. Thus, the method builds an outer approximation of the cost-to-go function in the normal state-space, and an inner approximation of the cost-to-go function in the objective state-space.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"warning: Warning\nSupport for objective states in SDDP.jl is experimental. Models are considerably more computationally intensive, the interface is less user-friendly, and there are subtle gotchas to be aware of. Only use this if you have read and understood the theory behind the method.","category":"page"},{"location":"tutorial/objective_states/#One-dimensional-objective-states","page":"Objective states","title":"One-dimensional objective states","text":"","category":"section"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"Let's assume that the fuel cost is not fixed, but instead evolves according to a multiplicative auto-regressive process: fuel_cost[t] = ω * fuel_cost[t-1], where ω is drawn from the sample space [0.75, 0.9, 1.1, 1.25] with equal probability.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"An objective state can be added to a subproblem using the SDDP.add_objective_state function. This can only be called once per subproblem. If you want to add a multi-dimensional objective state, read Multi-dimensional objective states. SDDP.add_objective_state takes a number of keyword arguments. 
The two required ones are","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"initial_value: the value of the objective state at the root node of the policy graph (i.e., identical to the initial_value when defining normal state variables.\nlipschitz: the Lipschitz constant of the cost-to-go function with respect to the objective state. In other words, this value is the maximum change in the cost-to-go function at any point in the state space, given a one-unit change in the objective state.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"There are also two optional keyword arguments: lower_bound and upper_bound, which give SDDP.jl hints (importantly, not constraints) about the domain of the objective state. Setting these bounds appropriately can improve the speed of convergence.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"Finally, SDDP.add_objective_state requires an update function. This function takes two arguments. The first is the incoming value of the objective state, and the second is the realization of the stagewise-independent noise term (set using SDDP.parameterize). The function should return the value of the objective state to be used in the current subproblem.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"This connection with the stagewise-independent noise term means that SDDP.parameterize must be called in a subproblem that defines an objective state. Inside SDDP.parameterize, the value of the objective state to be used in the current subproblem (i.e., after the update function), can be queried using SDDP.objective_state.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"Here is the full model with the objective state.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"using SDDP, HiGHS\n\nmodel = SDDP.LinearPolicyGraph(;\n stages = 3,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, t\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n inflow\n end)\n @constraints(\n subproblem,\n begin\n volume.out == volume.in + inflow - hydro_generation - hydro_spill\n demand_constraint, thermal_generation + hydro_generation == 150.0\n end\n )\n\n # Add an objective state. 
ω will be the same value that is called in\n # `SDDP.parameterize`.\n\n SDDP.add_objective_state(\n subproblem;\n initial_value = 50.0,\n lipschitz = 10_000.0,\n lower_bound = 50.0,\n upper_bound = 150.0,\n ) do fuel_cost, ω\n return ω.fuel * fuel_cost\n end\n\n # Create the cartesian product of a multi-dimensional random variable.\n\n Ω = [\n (fuel = f, inflow = w) for f in [0.75, 0.9, 1.1, 1.25] for\n w in [0.0, 50.0, 100.0]\n ]\n\n SDDP.parameterize(subproblem, Ω) do ω\n # Query the current fuel cost.\n fuel_cost = SDDP.objective_state(subproblem)\n @stageobjective(subproblem, fuel_cost * thermal_generation)\n return JuMP.fix(inflow, ω.inflow)\n end\nend","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"After creating our model, we can train and simulate as usual.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"SDDP.train(model; run_numerical_stability_report = false)\n\nsimulations = SDDP.simulate(model, 1)\n\nprint(\"Finished training and simulating.\")","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"To demonstrate how the objective states are updated, consider the sequence of noise observations:","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"[stage[:noise_term] for stage in simulations[1]]","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"This, the fuel cost in the first stage should be 0.75 * 50 = 37.5. The fuel cost in the second stage should be 1.1 * 37.5 = 41.25. The fuel cost in the third stage should be 0.75 * 41.25 = 30.9375.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"To confirm this, the values of the objective state in a simulation can be queried using the :objective_state key.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"[stage[:objective_state] for stage in simulations[1]]","category":"page"},{"location":"tutorial/objective_states/#Multi-dimensional-objective-states","page":"Objective states","title":"Multi-dimensional objective states","text":"","category":"section"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"You can construct multi-dimensional price processes using NTuples. Just replace every scalar value associated with the objective state by a tuple. 
For example, initial_value = 1.0 becomes initial_value = (1.0, 2.0).","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"Here is an example:","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"model = SDDP.LinearPolicyGraph(;\n stages = 3,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, t\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n inflow\n end)\n @constraints(\n subproblem,\n begin\n volume.out == volume.in + inflow - hydro_generation - hydro_spill\n demand_constraint, thermal_generation + hydro_generation == 150.0\n end\n )\n\n SDDP.add_objective_state(\n subproblem;\n initial_value = (50.0, 50.0),\n lipschitz = (10_000.0, 10_000.0),\n lower_bound = (50.0, 50.0),\n upper_bound = (150.0, 150.0),\n ) do fuel_cost, ω\n # fuel_cost is a tuple, containing the (fuel_cost[t-1], fuel_cost[t-2])\n # This function returns a new tuple containing\n # (fuel_cost[t], fuel_cost[t-1]). Thus, we need to compute the new\n # cost:\n new_cost = fuel_cost[1] + 0.5 * (fuel_cost[1] - fuel_cost[2]) + ω.fuel\n # And then return the appropriate tuple:\n return (new_cost, fuel_cost[1])\n end\n\n Ω = [\n (fuel = f, inflow = w) for f in [-10.0, -5.0, 5.0, 10.0] for\n w in [0.0, 50.0, 100.0]\n ]\n\n SDDP.parameterize(subproblem, Ω) do ω\n fuel_cost, _ = SDDP.objective_state(subproblem)\n @stageobjective(subproblem, fuel_cost * thermal_generation)\n return JuMP.fix(inflow, ω.inflow)\n end\nend\n\nSDDP.train(model; run_numerical_stability_report = false)\n\nsimulations = SDDP.simulate(model, 1)\n\nprint(\"Finished training and simulating.\")","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"This time, since our objective state is two-dimensional, the objective states are tuples with two elements:","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"[stage[:objective_state] for stage in simulations[1]]","category":"page"},{"location":"tutorial/objective_states/#objective_state_warnings","page":"Objective states","title":"Warnings","text":"","category":"section"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"There are a number of things to be aware of when using objective states.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"The key assumption is that price is independent of the states and actions in the model.\nThat means that the price cannot appear in any @constraints. Nor can you use any @variables in the update function.\nChoosing an appropriate Lipschitz constant is difficult.\nThe points discussed in Choosing an initial bound are relevant. The Lipschitz constant should not be chosen as large as possible (a smaller value helps with convergence and avoids the numerical issues discussed above), but if it is chosen too small, it may cut off the feasible region and lead to a sub-optimal solution.\nYou need to ensure that the cost-to-go function is concave with respect to the objective state before the update.\nIf the update function is linear, this is always the case. 
In some situations, the update function can be nonlinear (e.g., multiplicative as we have above). In general, placing constraints on the price (e.g., clamp(price, 0, 1)) will destroy concavity. Caveat emptor. It's up to you if this is a problem. If it isn't you'll get a good heuristic with no guarantee of global optimality.","category":"page"},{"location":"examples/air_conditioning_forward/","page":"Training with a different forward model","title":"Training with a different forward model","text":"EditURL = \"air_conditioning_forward.jl\"","category":"page"},{"location":"examples/air_conditioning_forward/#Training-with-a-different-forward-model","page":"Training with a different forward model","title":"Training with a different forward model","text":"","category":"section"},{"location":"examples/air_conditioning_forward/","page":"Training with a different forward model","title":"Training with a different forward model","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/air_conditioning_forward/","page":"Training with a different forward model","title":"Training with a different forward model","text":"using SDDP\nimport HiGHS\nimport Test\n\nfunction create_air_conditioning_model(; convex::Bool)\n return SDDP.LinearPolicyGraph(;\n stages = 3,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, t\n @variable(sp, 0 <= x <= 100, SDDP.State, initial_value = 0)\n @variable(sp, 0 <= u_production <= 200)\n @variable(sp, u_overtime >= 0)\n if !convex\n set_integer(x.out)\n set_integer(u_production)\n set_integer(u_overtime)\n end\n @constraint(sp, demand, x.in - x.out + u_production + u_overtime == 0)\n Ω = [[100.0], [100.0, 300.0], [100.0, 300.0]]\n SDDP.parameterize(ω -> JuMP.set_normalized_rhs(demand, ω), sp, Ω[t])\n @stageobjective(sp, 100 * u_production + 300 * u_overtime + 50 * x.out)\n end\nend\n\nconvex = create_air_conditioning_model(; convex = true)\nnon_convex = create_air_conditioning_model(; convex = false)\nSDDP.train(\n convex;\n forward_pass = SDDP.AlternativeForwardPass(non_convex),\n post_iteration_callback = SDDP.AlternativePostIterationCallback(non_convex),\n iteration_limit = 10,\n)\nTest.@test isapprox(SDDP.calculate_bound(non_convex), 62_500.0, atol = 0.1)\nTest.@test isapprox(SDDP.calculate_bound(convex), 62_500.0, atol = 0.1)","category":"page"},{"location":"examples/objective_state_newsvendor/","page":"Newsvendor","title":"Newsvendor","text":"EditURL = \"objective_state_newsvendor.jl\"","category":"page"},{"location":"examples/objective_state_newsvendor/#Newsvendor","page":"Newsvendor","title":"Newsvendor","text":"","category":"section"},{"location":"examples/objective_state_newsvendor/","page":"Newsvendor","title":"Newsvendor","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/objective_state_newsvendor/","page":"Newsvendor","title":"Newsvendor","text":"This example is based on the classical newsvendor problem, but features an AR(1) spot-price.","category":"page"},{"location":"examples/objective_state_newsvendor/","page":"Newsvendor","title":"Newsvendor","text":" V(x[t-1], ω[t]) = max p[t] × u[t]\n subject to x[t] = x[t-1] - u[t] + ω[t]\n u[t] ∈ [0, 1]\n x[t] ≥ 0\n p[t] = p[t-1] + ϕ[t]","category":"page"},{"location":"examples/objective_state_newsvendor/","page":"Newsvendor","title":"Newsvendor","text":"The initial conditions are","category":"page"},{"location":"examples/objective_state_newsvendor/","page":"Newsvendor","title":"Newsvendor","text":"x[0] = 2.0\np[0] = 1.5\nω[t] ~ {0, 0.05, 0.10, ..., 0.45, 0.5} with uniform probability.\nϕ[t] ~ {-0.25, -0.125, 0.125, 0.25} with uniform probability.","category":"page"},{"location":"examples/objective_state_newsvendor/","page":"Newsvendor","title":"Newsvendor","text":"using SDDP, HiGHS, Statistics, Test\n\nfunction joint_distribution(; kwargs...)\n names = tuple([first(kw) for kw in kwargs]...)\n values = tuple([last(kw) for kw in kwargs]...)\n output_type = NamedTuple{names,Tuple{eltype.(values)...}}\n distribution = map(output_type, Base.product(values...))\n return distribution[:]\nend\n\nfunction newsvendor_example(; cut_type)\n model = SDDP.PolicyGraph(\n SDDP.LinearGraph(3);\n sense = :Max,\n upper_bound = 50.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, stage\n @variables(subproblem, begin\n x >= 0, (SDDP.State, initial_value = 2)\n 0 <= u <= 1\n w\n end)\n @constraint(subproblem, x.out == x.in - u + w)\n SDDP.add_objective_state(\n subproblem;\n initial_value = 1.5,\n lower_bound = 0.75,\n upper_bound = 2.25,\n lipschitz = 100.0,\n ) do y, ω\n return y + ω.price_noise\n end\n noise_terms = joint_distribution(;\n demand = 0:0.05:0.5,\n price_noise = [-0.25, -0.125, 0.125, 0.25],\n )\n SDDP.parameterize(subproblem, noise_terms) do ω\n JuMP.fix(w, ω.demand)\n price = SDDP.objective_state(subproblem)\n @stageobjective(subproblem, price * u)\n end\n end\n SDDP.train(\n model;\n log_frequency = 10,\n time_limit = 20.0,\n cut_type = cut_type,\n )\n @test SDDP.calculate_bound(model) ≈ 4.04 atol = 0.05\n results = SDDP.simulate(model, 500)\n objectives =\n [sum(s[:stage_objective] for s in simulation) for simulation in results]\n @test round(Statistics.mean(objectives); digits = 2) ≈ 4.04 atol = 0.1\n return\nend\n\nnewsvendor_example(; cut_type = SDDP.SINGLE_CUT)\nnewsvendor_example(; cut_type = SDDP.MULTI_CUT)","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"EditURL = \"arma.jl\"","category":"page"},{"location":"tutorial/arma/#Auto-regressive-stochastic-processes","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"","category":"section"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"SDDP.jl assumes that the random variable in each node is independent of the random variables in all other nodes. 
However, a common request is to model the random variables by some auto-regressive process.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"There are two ways to do this:","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"model the random variable as a Markov chain\nuse the \"state-space expansion\" trick","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"info: Info\nThis tutorial is in the context of a hydro-thermal scheduling example, but it should be apparent how the ideas transfer to other applications.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"using SDDP\nimport HiGHS","category":"page"},{"location":"tutorial/arma/#state-space-expansion","page":"Auto-regressive stochastic processes","title":"The state-space expansion trick","text":"","category":"section"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"In An introduction to SDDP.jl, we assumed that the inflows were stagewise-independent. However, in many cases this is not correct, and inflow models are more accurately described by an auto-regressive process such as:","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"inflow_t = inflow_t-1 + varepsilon","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"Here varepsilon is a random variable, and the inflow in stage t is the inflow in stage t-1 plus varepsilon (which might be negative).","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"For simplicity, we omit any coefficients and other terms, but this could easily be extended to a model like","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"inflow_t = a times inflow_t-1 + b + varepsilon","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"In practice, you can estimate a distribution for varepsilon by fitting the chosen statistical model to historical data, and then using the empirical residuals.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"To implement the auto-regressive model in SDDP.jl, we introduce inflow as a state variable.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"tip: Tip\nOur rule of thumb for \"when is something a state variable?\" is: if you need the value of a variable from a previous stage to compute something in stage t, then that variable is a state variable.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"model = SDDP.LinearPolicyGraph(;\n stages = 
3,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variable(sp, 0 <= x <= 200, SDDP.State, initial_value = 200)\n @variable(sp, g_t >= 0)\n @variable(sp, g_h >= 0)\n @variable(sp, s >= 0)\n @constraint(sp, g_h + g_t == 150)\n c = [50, 100, 150]\n @stageobjective(sp, c[t] * g_t)\n # =========================================================================\n # New stuff below Here\n # Add inflow as a state\n @variable(sp, inflow, SDDP.State, initial_value = 50.0)\n # Add the random variable as a control variable\n @variable(sp, ε)\n # The equation describing our statistical model\n @constraint(sp, inflow.out == inflow.in + ε)\n # The new water balance constraint using the state variable\n @constraint(sp, x.out == x.in - g_h - s + inflow.out)\n # Assume we have some empirical residuals:\n Ω = [-10.0, 0.1, 9.6]\n SDDP.parameterize(sp, Ω) do ω\n return JuMP.fix(ε, ω)\n end\nend","category":"page"},{"location":"tutorial/arma/#When-can-this-trick-be-used?","page":"Auto-regressive stochastic processes","title":"When can this trick be used?","text":"","category":"section"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"The state-space expansion trick should be used when:","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"The random variable appears additively in the objective or in the constraints. Something like inflow * decision_variable will not work.\nThe statistical model is linear, or can be written using the JuMP @constraint macro.\nThe dimension of the random variable is small (see Vector auto-regressive models for the multi-variate case).","category":"page"},{"location":"tutorial/arma/#The-Markov-chain-approach","page":"Auto-regressive stochastic processes","title":"The Markov chain approach","text":"","category":"section"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"In the Markov chain approach, we model the stochastic process for inflow by a discrete Markov chain. Markov chains are nodes with transition probabilities between the nodes. SDDP.jl has good support for solving problems in which the uncertainty is formulated as a Markov chain.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"The first step of the Markov chain approach is to write a function which simulates the stochastic process. 
Here is a simulator for our inflow model:","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"function simulator()\n inflow = zeros(3)\n current = 50.0\n Ω = [-10.0, 0.1, 9.6]\n for t in 1:3\n current += rand(Ω)\n inflow[t] = current\n end\n return inflow\nend","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"When called with no arguments, it produces a vector of inflows:","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"simulator()","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"warning: Warning\nThe simulator must return a Vector{Float64}, so it is limited to a uni-variate random variable. It is possible to do something similar for multi-variate random variable, but you'll have to manually construct the Markov transition matrix, and solution times scale poorly, even in the two-dimensional case.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"The next step is to call SDDP.MarkovianGraph with our simulator. This function will attempt to fit a Markov chain to the stochastic process produced by your simulator. There are two key arguments:","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"budget is the total number of nodes we want in the Markov chain\nscenarios is a limit on the number of times we can call simulator","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"graph = SDDP.MarkovianGraph(simulator; budget = 8, scenarios = 30)","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"Here we can see we have created a MarkovianGraph with nodes like (2, 59.7). 
The first element of each node is the stage, and the second element is the inflow.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"Create a SDDP.PolicyGraph using graph as follows:","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"model = SDDP.PolicyGraph(\n graph; # <--- New stuff\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, node\n t, inflow = node # <--- New stuff\n @variable(sp, 0 <= x <= 200, SDDP.State, initial_value = 200)\n @variable(sp, g_t >= 0)\n @variable(sp, g_h >= 0)\n @variable(sp, s >= 0)\n @constraint(sp, g_h + g_t == 150)\n c = [50, 100, 150]\n @stageobjective(sp, c[t] * g_t)\n # The new water balance constraint using the node:\n @constraint(sp, x.out == x.in - g_h - s + inflow)\nend","category":"page"},{"location":"tutorial/arma/#When-can-this-trick-be-used?-2","page":"Auto-regressive stochastic processes","title":"When can this trick be used?","text":"","category":"section"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"The Markov chain approach should be used when:","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"The random variable is uni-variate\nThe random variable appears in the objective function or as a variable coefficient in the constraint matrix\nIt's non-trivial to write the stochastic process as a series of constraints (for example, it uses nonlinear terms)\nThe number of nodes is modest (for example, a budget of hundreds, up to perhaps 1000)","category":"page"},{"location":"tutorial/arma/#Vector-auto-regressive-models","page":"Auto-regressive stochastic processes","title":"Vector auto-regressive models","text":"","category":"section"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"The state-space expansion section assumed that the random variable was uni-variate. However, the approach naturally extends to vector auto-regressive models. 
For example, if inflow is a 2-dimensional vector, then we can model a vector auto-regressive model to it as follows:","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"inflow_t = A times inflow_t-1 + b + varepsilon","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"Here A is a 2-by-2 matrix, and b and varepsilon are 2-by-1 vectors.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"model = SDDP.LinearPolicyGraph(;\n stages = 3,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variable(sp, 0 <= x <= 200, SDDP.State, initial_value = 200)\n @variable(sp, g_t >= 0)\n @variable(sp, g_h >= 0)\n @variable(sp, s >= 0)\n @constraint(sp, g_h + g_t == 150)\n c = [50, 100, 150]\n @stageobjective(sp, c[t] * g_t)\n # =========================================================================\n # New stuff below Here\n # Add inflow as a state\n @variable(sp, inflow[1:2], SDDP.State, initial_value = 50.0)\n # Add the random variable as a control variable\n @variable(sp, ε[1:2])\n # The equation describing our statistical model\n A = [0.8 0.2; 0.2 0.8]\n @constraint(\n sp,\n [i = 1:2],\n inflow[i].out == sum(A[i, j] * inflow[j].in for j in 1:2) + ε[i],\n )\n # The new water balance constraint using the state variable\n @constraint(sp, x.out == x.in - g_h - s + inflow[1].out + inflow[2].out)\n # Assume we have some empirical residuals:\n Ω₁ = [-10.0, 0.1, 9.6]\n Ω₂ = [-10.0, 0.1, 9.6]\n Ω = [(ω₁, ω₂) for ω₁ in Ω₁ for ω₂ in Ω₂]\n SDDP.parameterize(sp, Ω) do ω\n JuMP.fix(ε[1], ω[1])\n JuMP.fix(ε[2], ω[2])\n return\n end\nend","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"EditURL = \"mdps.jl\"","category":"page"},{"location":"tutorial/mdps/#Example:-Markov-Decision-Processes","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"","category":"section"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"SDDP.jl can be used to solve a variety of Markov Decision processes. If the problem has continuous state and control spaces, and the objective and transition function are convex, then SDDP.jl can find a globally optimal policy. In other cases, SDDP.jl will find a locally optimal policy.","category":"page"},{"location":"tutorial/mdps/#A-simple-example","page":"Example: Markov Decision Processes","title":"A simple example","text":"","category":"section"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"A simple demonstration of this is the example taken from page 98 of the book \"Markov Decision Processes: Discrete stochastic Dynamic Programming\", by Martin L. 
Puterman.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"The example, as described in Section 4.6.3 of the book, is to minimize a sum of squares of N non-negative variables, subject to a budget constraint that the variable values add up to M. Put mathematically, that is:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"beginaligned\nmin sumlimits_i=1^N x_i^2 \nst sumlimits_i=1^N x_i = M \n x_i ge 0 quad i in 1ldotsN\nendaligned","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"The optimal objective value is M^2N, and the optimal solution is x_i = M N, which can be shown by induction.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"This can be reformulated as a Markov Decision Process by introducing a state variable, s, which tracks the un-spent budget over N stages.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"beginaligned\nV_t(s) = min x^2 + V_t+1(s^prime) \nst s^prime = s - x \n x le s \n x ge 0 \n s ge 0\nendaligned","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"and in the last stage V_N, there is an additional constraint that s^prime = 0.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"The optimal objective value for a budget of M is found by solving for V_1(M).","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"info: Info\nSince everything here is continuous and convex, SDDP.jl will find the globally optimal policy.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"If the reformulation from the single problem into the recursive form of the Markov Decision Process is not obvious, consult Puterman's book.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"We can model and solve this problem using SDDP.jl as follows:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"using SDDP\nimport Ipopt\n\nM, N = 5, 3\n\nmodel = SDDP.LinearPolicyGraph(;\n stages = N,\n lower_bound = 0.0,\n optimizer = Ipopt.Optimizer,\n) do subproblem, node\n @variable(subproblem, s >= 0, SDDP.State, initial_value = M)\n @variable(subproblem, x >= 0)\n @stageobjective(subproblem, x^2)\n @constraint(subproblem, x <= s.in)\n @constraint(subproblem, s.out == s.in - x)\n if node == N\n fix(s.out, 0.0; force = true)\n end\n return\nend\n\nSDDP.train(model)","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"Check that we got the theoretical optimum:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision 
Processes","text":"SDDP.calculate_bound(model), M^2 / N","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"And check that we found the theoretical value for each x_i:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"simulations = SDDP.simulate(model, 1, [:x])\nfor data in simulations[1]\n println(\"x_$(data[:node_index]) = $(data[:x])\")\nend","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"Close enough! We don't get exactly 5/3 because of numerical tolerances within our choice of optimization solver (in this case, Ipopt).","category":"page"},{"location":"tutorial/mdps/#A-more-complicated-policy","page":"Example: Markov Decision Processes","title":"A more complicated policy","text":"","category":"section"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"SDDP.jl is also capable of finding policies for other types of Markov Decision Processes. A classic example of a Markov Decision Process is the problem of finding a path through a maze.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"Here's one example of a maze. Try changing the parameters to explore different mazes:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"M, N = 3, 4\ninitial_square = (1, 1)\nreward, illegal_squares, penalties = (3, 4), [(2, 2)], [(3, 1), (2, 4)]\npath = fill(\"⋅\", M, N)\npath[initial_square...] = \"1\"\nfor (k, v) in (illegal_squares => \"▩\", penalties => \"†\", [reward] => \"*\")\n for (i, j) in k\n path[i, j] = v\n end\nend\nprint(join([join(path[i, :], ' ') for i in 1:size(path, 1)], '\\n'))","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"Our goal is to get from square 1 to square *. If we step on a †, we incur a penalty of 1. Squares with ▩ are blocked; we cannot move there.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"There are a variety of ways that we can solve this problem. We're going to solve it using a stationary binary stochastic programming formulation.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"Our state variable will be a matrix of binary variables x_ij, where each element is 1 if the agent is in the square and 0 otherwise. In each period, we incur a reward of 1 if we are in the reward square and a penalty of -1 if we are in a penalties square. We cannot move to the illegal_squares, so those x_ij = 0. 
Feasibility between moves is modelled by constraints of the form:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"x^prime_ij le sumlimits_(ab)in P x_ab","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"where P is the set of squares from which it is valid to move from (a, b) to (i, j).","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"Because we are looking for a stationary policy, we need a unicyclic graph with a discount factor:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"discount_factor = 0.9\ngraph = SDDP.UnicyclicGraph(discount_factor)","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"Then we can formulate our full model:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"import HiGHS\n\nmodel = SDDP.PolicyGraph(\n graph;\n sense = :Max,\n upper_bound = 1 / (1 - discount_factor),\n optimizer = HiGHS.Optimizer,\n) do sp, _\n # Our state is a binary variable for each square\n @variable(\n sp,\n x[i = 1:M, j = 1:N],\n Bin,\n SDDP.State,\n initial_value = (i, j) == initial_square,\n )\n # Can only be in one square at a time\n @constraint(sp, sum(x[i, j].out for i in 1:M, j in 1:N) == 1)\n # Incur rewards and penalties\n @stageobjective(\n sp,\n x[reward...].out - sum(x[i, j].out for (i, j) in penalties)\n )\n # Some squares are illegal\n @constraint(sp, [(i, j) in illegal_squares], x[i, j].out <= 0)\n # Constraints on valid moves\n for i in 1:M, j in 1:N\n moves = [(i - 1, j), (i + 1, j), (i, j), (i, j + 1), (i, j - 1)]\n filter!(v -> 1 <= v[1] <= M && 1 <= v[2] <= N, moves)\n @constraint(sp, x[i, j].out <= sum(x[a, b].in for (a, b) in moves))\n end\n return\nend","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"The upper bound is obtained by assuming that we reach the reward square in one move and stay there.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"warning: Warning\nSince there are discrete decisions here, SDDP.jl is not guaranteed to find the globally optimal policy.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"SDDP.train(model)","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"Simulating a cyclic policy graph requires an explicit sampling_scheme that does not terminate early based on the cycle probability:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"simulations = SDDP.simulate(\n model,\n 1,\n [:x];\n sampling_scheme = SDDP.InSampleMonteCarlo(;\n max_depth = 5,\n terminate_on_dummy_leaf = false,\n ),\n);\nnothing #hide","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision 
Processes","title":"Example: Markov Decision Processes","text":"Fill in the path with the time-step in which we visit the square:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"for (t, data) in enumerate(simulations[1]), i in 1:M, j in 1:N\n if data[:x][i, j].in > 0.5\n path[i, j] = \"$t\"\n end\nend\n\nprint(join([join(path[i, :], ' ') for i in 1:size(path, 1)], '\\n'))","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"tip: Tip\nThis formulation will likely struggle as the number of cells in the maze increases. Can you think of an equivalent formulation that uses fewer state variables?","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"EditURL = \"Hydro_thermal.jl\"","category":"page"},{"location":"examples/Hydro_thermal/#Hydro-thermal-scheduling","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"","category":"section"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/Hydro_thermal/#Problem-Description","page":"Hydro-thermal scheduling","title":"Problem Description","text":"","category":"section"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"In a hydro-thermal problem, the agent controls a hydro-electric generator and reservoir. Each time period, they need to choose a generation quantity from thermal g_t, and hydro g_h, in order to meet demand w_d, which is a stagewise-independent random variable. The state variable, x, is the quantity of water in the reservoir at the start of each time period, and it has a minimum level of 5 units and a maximum level of 15 units. We assume that there are 10 units of water in the reservoir at the start of time, so that x_0 = 10. The state-variable is connected through time by the water balance constraint: x.out = x.in - g_h - s + w_i, where x.out is the quantity of water at the end of the time period, x.in is the quantity of water at the start of the time period, s is the quantity of water spilled from the reservoir, and w_i is a stagewise-independent random variable that represents the inflow into the reservoir during the time period.","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"We assume that there are three stages, t=1, 2, 3, representing summer-fall, winter, and spring, and that we are solving this problem in an infinite-horizon setting with a discount factor of 0.95.","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"In each stage, the agent incurs the cost of spillage, plus the cost of thermal generation. 
We assume that the cost of thermal generation is dependent on the stage t = 1, 2, 3, and that in each stage, w is drawn from the set (w_i, w_d) = {(0, 7.5), (3, 5), (10, 2.5)} with equal probability.","category":"page"},{"location":"examples/Hydro_thermal/#Importing-packages","page":"Hydro-thermal scheduling","title":"Importing packages","text":"","category":"section"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"For this example, in addition to SDDP, we need HiGHS as a solver and Statistics to compute the mean of our simulations.","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"using HiGHS\nusing SDDP\nusing Statistics","category":"page"},{"location":"examples/Hydro_thermal/#Constructing-the-policy-graph","page":"Hydro-thermal scheduling","title":"Constructing the policy graph","text":"","category":"section"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"There are three stages in our infinite-horizon problem, so we construct a unicyclic policy graph using SDDP.UnicyclicGraph:","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"graph = SDDP.UnicyclicGraph(0.95; num_nodes = 3)","category":"page"},{"location":"examples/Hydro_thermal/#Constructing-the-model","page":"Hydro-thermal scheduling","title":"Constructing the model","text":"","category":"section"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"Much of the macro code (i.e., lines starting with @) in the first part of the following should be familiar to users of JuMP.","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"Inside the do-end block, sp is a standard JuMP model, and t is the index of the node, which will take the values t = 1, 2, 3.","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"The state variable x, constructed by passing the SDDP.State tag to @variable, is actually a Julia struct with two fields: x.in and x.out, corresponding to the incoming and outgoing state variables respectively. Both x.in and x.out are standard JuMP variables. The initial_value keyword provides the value of the state variable in the root node (i.e., x_0).","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"Compared to a JuMP model, one key difference is that we use @stageobjective instead of @objective. The SDDP.parameterize function takes a list of supports for w and parameterizes the JuMP model sp by setting the right-hand sides of the appropriate constraints (note how the constraints initially have a right-hand side of 0). 
By default, it is assumed that the realizations have uniform probability, but a probability mass vector can also be provided.","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"model = SDDP.PolicyGraph(\n graph;\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variable(sp, 5 <= x <= 15, SDDP.State, initial_value = 10)\n @variable(sp, g_t >= 0)\n @variable(sp, g_h >= 0)\n @variable(sp, s >= 0)\n @constraint(sp, balance, x.out - x.in + g_h + s == 0)\n @constraint(sp, demand, g_h + g_t == 0)\n @stageobjective(sp, s + t * g_t)\n SDDP.parameterize(sp, [[0, 7.5], [3, 5], [10, 2.5]]) do w\n set_normalized_rhs(balance, w[1])\n return set_normalized_rhs(demand, w[2])\n end\nend","category":"page"},{"location":"examples/Hydro_thermal/#Training-the-policy","page":"Hydro-thermal scheduling","title":"Training the policy","text":"","category":"section"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"Once a model has been constructed, the next step is to train the policy. This can be achieved using SDDP.train. There are many options that can be passed, but iteration_limit terminates the training after the prescribed number of SDDP iterations.","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"SDDP.train(model; iteration_limit = 100)","category":"page"},{"location":"examples/Hydro_thermal/#Simulating-the-policy","page":"Hydro-thermal scheduling","title":"Simulating the policy","text":"","category":"section"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"After training, we can simulate the policy using SDDP.simulate.","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"sims = SDDP.simulate(model, 100, [:g_t])\nmu = round(mean([s[1][:g_t] for s in sims]); digits = 2)\nprintln(\"On average, $(mu) units of thermal are used in the first stage.\")","category":"page"},{"location":"examples/Hydro_thermal/#Extracting-the-water-values","page":"Hydro-thermal scheduling","title":"Extracting the water values","text":"","category":"section"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"Finally, we can use SDDP.ValueFunction and SDDP.evaluate to obtain and evaluate the value function at different points in the state-space. Note that since we are minimizing, the price has a negative sign: each additional unit of water leads to a decrease in the expected long-run cost.","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"V = SDDP.ValueFunction(model[1])\ncost, price = SDDP.evaluate(V; x = 10)","category":"page"},{"location":"examples/hydro_valley/","page":"Hydro valleys","title":"Hydro valleys","text":"EditURL = \"hydro_valley.jl\"","category":"page"},{"location":"examples/hydro_valley/#Hydro-valleys","page":"Hydro valleys","title":"Hydro valleys","text":"","category":"section"},{"location":"examples/hydro_valley/","page":"Hydro valleys","title":"Hydro valleys","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/hydro_valley/","page":"Hydro valleys","title":"Hydro valleys","text":"This problem is a version of the hydro-thermal scheduling problem. The goal is to operate two hydro-dams in a valley chain over time in the face of inflow and price uncertainty.","category":"page"},{"location":"examples/hydro_valley/","page":"Hydro valleys","title":"Hydro valleys","text":"Turbine response curves are modelled by piecewise linear functions which map the flow rate into a power. These can be controlled by specifying the breakpoints in the piecewise linear function as the knots in the Turbine struct.","category":"page"},{"location":"examples/hydro_valley/","page":"Hydro valleys","title":"Hydro valleys","text":"The model can be created using the hydro_valley_model function. It has a few keyword arguments to allow automated testing of the library. hasstagewiseinflows determines if the RHS noise constraint should be added. hasmarkovprice determines if the price uncertainty (modelled by a Markov chain) should be added.","category":"page"},{"location":"examples/hydro_valley/","page":"Hydro valleys","title":"Hydro valleys","text":"In the third stage, the Markov chain has some unreachable states to test some code-paths in the library.","category":"page"},{"location":"examples/hydro_valley/","page":"Hydro valleys","title":"Hydro valleys","text":"We can also set the sense to :Min or :Max (the objective and bound are flipped appropriately).","category":"page"},{"location":"examples/hydro_valley/","page":"Hydro valleys","title":"Hydro valleys","text":"using SDDP, HiGHS, Test, Random\n\nstruct Turbine\n flowknots::Vector{Float64}\n powerknots::Vector{Float64}\nend\n\nstruct Reservoir\n min::Float64\n max::Float64\n initial::Float64\n turbine::Turbine\n spill_cost::Float64\n inflows::Vector{Float64}\nend\n\nfunction hydro_valley_model(;\n hasstagewiseinflows::Bool = true,\n hasmarkovprice::Bool = true,\n sense::Symbol = :Max,\n)\n valley_chain = [\n Reservoir(\n 0,\n 200,\n 200,\n Turbine([50, 60, 70], [55, 65, 70]),\n 1000,\n [0, 20, 50],\n ),\n Reservoir(\n 0,\n 200,\n 200,\n Turbine([50, 60, 70], [55, 65, 70]),\n 1000,\n [0, 0, 20],\n ),\n ]\n\n turbine(i) = valley_chain[i].turbine\n\n # Prices[t, Markov state]\n prices = [\n 1 2 0\n 2 1 0\n 3 4 0\n ]\n\n # Transition matrix\n if hasmarkovprice\n transition =\n Array{Float64,2}[[1.0]', [0.6 0.4], [0.6 0.4 0.0; 0.3 0.7 0.0]]\n else\n transition = [ones(Float64, (1, 1)) for t in 1:3]\n end\n\n flipobj = (sense == :Max) ? 1.0 : -1.0\n lower = (sense == :Max) ? -Inf : -1e6\n upper = (sense == :Max) ? 
1e6 : Inf\n\n N = length(valley_chain)\n\n # Initialise SDDP Model\n return m = SDDP.MarkovianPolicyGraph(;\n sense = sense,\n lower_bound = lower,\n upper_bound = upper,\n transition_matrices = transition,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, node\n t, markov_state = node\n\n # ------------------------------------------------------------------\n # SDDP State Variables\n # Level of upper reservoir\n @variable(\n subproblem,\n valley_chain[r].min <= reservoir[r = 1:N] <= valley_chain[r].max,\n SDDP.State,\n initial_value = valley_chain[r].initial\n )\n\n # ------------------------------------------------------------------\n # Additional variables\n @variables(\n subproblem,\n begin\n outflow[r = 1:N] >= 0\n spill[r = 1:N] >= 0\n inflow[r = 1:N] >= 0\n generation_quantity >= 0 # Total quantity of water\n # Proportion of levels to dispatch on\n 0 <=\n dispatch[r = 1:N, level = 1:length(turbine(r).flowknots)] <=\n 1\n rainfall[i = 1:N]\n end\n )\n\n # ------------------------------------------------------------------\n # Constraints\n @constraints(\n subproblem,\n begin\n # flow from upper reservoir\n reservoir[1].out ==\n reservoir[1].in + inflow[1] - outflow[1] - spill[1]\n\n # other flows\n flow[i = 2:N],\n reservoir[i].out ==\n reservoir[i].in + inflow[i] - outflow[i] - spill[i] +\n outflow[i-1] +\n spill[i-1]\n\n # Total quantity generated\n generation_quantity == sum(\n turbine(r).powerknots[level] * dispatch[r, level] for\n r in 1:N for level in 1:length(turbine(r).powerknots)\n )\n\n # ------------------------------------------------------------------\n # Flow out\n turbineflow[r = 1:N],\n outflow[r] == sum(\n turbine(r).flowknots[level] * dispatch[r, level] for\n level in 1:length(turbine(r).flowknots)\n )\n\n # Dispatch combination of levels\n dispatched[r = 1:N],\n sum(\n dispatch[r, level] for\n level in 1:length(turbine(r).flowknots)\n ) <= 1\n end\n )\n\n # rainfall noises\n if hasstagewiseinflows && t > 1 # in future stages random inflows\n @constraint(subproblem, inflow_noise[i = 1:N], inflow[i] <= rainfall[i])\n\n SDDP.parameterize(\n subproblem,\n [\n (valley_chain[1].inflows[i], valley_chain[2].inflows[i]) for i in 1:length(transition)\n ],\n ) do ω\n for i in 1:N\n JuMP.fix(rainfall[i], ω[i])\n end\n end\n else # in the first stage deterministic inflow\n @constraint(\n subproblem,\n initial_inflow_noise[i = 1:N],\n inflow[i] <= valley_chain[i].inflows[1]\n )\n end\n\n # ------------------------------------------------------------------\n # Objective Function\n if hasmarkovprice\n @stageobjective(\n subproblem,\n flipobj * (\n prices[t, markov_state] * generation_quantity -\n sum(valley_chain[i].spill_cost * spill[i] for i in 1:N)\n )\n )\n else\n @stageobjective(\n subproblem,\n flipobj * (\n prices[t, 1] * generation_quantity -\n sum(valley_chain[i].spill_cost * spill[i] for i in 1:N)\n )\n )\n end\n end\nend\n\nfunction test_hydro_valley_model()\n\n # For repeatability\n Random.seed!(11111)\n\n # deterministic\n deterministic_model = hydro_valley_model(;\n hasmarkovprice = false,\n hasstagewiseinflows = false,\n )\n SDDP.train(\n deterministic_model;\n iteration_limit = 10,\n cut_deletion_minimum = 1,\n print_level = 0,\n )\n @test SDDP.calculate_bound(deterministic_model) ≈ 835.0 atol = 1e-3\n\n # stagewise inflows\n stagewise_model = hydro_valley_model(; hasmarkovprice = false)\n SDDP.train(stagewise_model; iteration_limit = 20, print_level = 0)\n @test SDDP.calculate_bound(stagewise_model) ≈ 838.33 atol = 1e-2\n\n # Markov prices\n markov_model = 
hydro_valley_model(; hasstagewiseinflows = false)\n SDDP.train(markov_model; iteration_limit = 10, print_level = 0)\n @test SDDP.calculate_bound(markov_model) ≈ 851.8 atol = 1e-2\n\n # stagewise inflows and Markov prices\n markov_stagewise_model =\n hydro_valley_model(; hasstagewiseinflows = true, hasmarkovprice = true)\n SDDP.train(markov_stagewise_model; iteration_limit = 10, print_level = 0)\n @test SDDP.calculate_bound(markov_stagewise_model) ≈ 855.0 atol = 1.0\n\n # risk averse stagewise inflows and Markov prices\n riskaverse_model = hydro_valley_model()\n SDDP.train(\n riskaverse_model;\n risk_measure = SDDP.EAVaR(; lambda = 0.5, beta = 0.66),\n iteration_limit = 10,\n print_level = 0,\n )\n @test SDDP.calculate_bound(riskaverse_model) ≈ 828.157 atol = 1.0\n\n # stagewise inflows and Markov prices\n worst_case_model = hydro_valley_model(; sense = :Min)\n SDDP.train(\n worst_case_model;\n risk_measure = SDDP.EAVaR(; lambda = 0.5, beta = 0.0),\n iteration_limit = 10,\n print_level = 0,\n )\n @test SDDP.calculate_bound(worst_case_model) ≈ -780.867 atol = 1.0\n\n # stagewise inflows and Markov prices\n cutselection_model = hydro_valley_model()\n SDDP.train(\n cutselection_model;\n iteration_limit = 10,\n print_level = 0,\n cut_deletion_minimum = 2,\n )\n @test SDDP.calculate_bound(cutselection_model) ≈ 855.0 atol = 1.0\n\n # Distributionally robust Optimization\n dro_model = hydro_valley_model(; hasmarkovprice = false)\n SDDP.train(\n dro_model;\n risk_measure = SDDP.ModifiedChiSquared(sqrt(2 / 3) - 1e-6),\n iteration_limit = 10,\n print_level = 0,\n )\n @test SDDP.calculate_bound(dro_model) ≈ 835.0 atol = 1.0\n\n dro_model = hydro_valley_model(; hasmarkovprice = false)\n SDDP.train(\n dro_model;\n risk_measure = SDDP.ModifiedChiSquared(1 / 6),\n iteration_limit = 20,\n print_level = 0,\n )\n @test SDDP.calculate_bound(dro_model) ≈ 836.695 atol = 1.0\n # (Note) radius ≈ sqrt(2/3), will set all noise probabilities to zero except the worst case noise\n # (Why?):\n # The distance from the uniform distribution (the assumed \"true\" distribution)\n # to a corner of a unit simplex is sqrt(S-1)/sqrt(S) if we have S scenarios. The corner\n # of a unit simplex is just a unit vector, i.e.: [0 ... 0 1 0 ... 0]. 
With this probability\n # vector, only one noise has a non-zero probablity.\n # In the worst case rhsnoise (0 inflows) the profit is:\n # Reservoir1: 70 * $3 + 70 * $2 + 65 * $1 +\n # Reservoir2: 70 * $3 + 70 * $2 + 70 * $1\n ### = $835\nend\n\ntest_hydro_valley_model()","category":"page"},{"location":"guides/add_noise_in_the_constraint_matrix/#Add-noise-in-the-constraint-matrix","page":"Add noise in the constraint matrix","title":"Add noise in the constraint matrix","text":"","category":"section"},{"location":"guides/add_noise_in_the_constraint_matrix/","page":"Add noise in the constraint matrix","title":"Add noise in the constraint matrix","text":"DocTestSetup = quote\n using SDDP, HiGHS\nend","category":"page"},{"location":"guides/add_noise_in_the_constraint_matrix/","page":"Add noise in the constraint matrix","title":"Add noise in the constraint matrix","text":"SDDP.jl supports coefficients in the constraint matrix through the JuMP.set_normalized_coefficient function.","category":"page"},{"location":"guides/add_noise_in_the_constraint_matrix/","page":"Add noise in the constraint matrix","title":"Add noise in the constraint matrix","text":"julia> model = SDDP.LinearPolicyGraph(\n stages=3, lower_bound = 0, optimizer = HiGHS.Optimizer\n ) do subproblem, t\n @variable(subproblem, x, SDDP.State, initial_value = 0.0)\n @constraint(subproblem, emissions, 1x.out <= 1)\n SDDP.parameterize(subproblem, [0.2, 0.5, 1.0]) do ω\n JuMP.set_normalized_coefficient(emissions, x.out, ω)\n println(emissions)\n end\n @stageobjective(subproblem, -x.out)\n end\nA policy graph with 3 nodes.\n Node indices: 1, 2, 3\n\njulia> SDDP.simulate(model, 1);\nemissions : x_out <= 1\nemissions : 0.2 x_out <= 1\nemissions : 0.5 x_out <= 1","category":"page"},{"location":"guides/add_noise_in_the_constraint_matrix/","page":"Add noise in the constraint matrix","title":"Add noise in the constraint matrix","text":"note: Note\nJuMP will normalize constraints by moving all variables to the left-hand side. Thus, @constraint(model, 0 <= 1 - x.out) becomes x.out <= 1. JuMP.set_normalized_coefficient sets the coefficient on the normalized constraint.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"EditURL = \"risk.jl\"","category":"page"},{"location":"explanation/risk/#Risk-aversion","page":"Risk aversion","title":"Risk aversion","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"In Introductory theory, we implemented a basic version of the SDDP algorithm. This tutorial extends that implementation to add risk-aversion.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Packages","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"This tutorial uses the following packages. For clarity, we call import PackageName so that we must prefix PackageName. to all functions and structs provided by that package. 
Everything not prefixed is either part of base Julia, or we wrote it.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"import ForwardDiff\nimport HiGHS\nimport Ipopt\nimport JuMP\nimport Statistics","category":"page"},{"location":"explanation/risk/#Risk-aversion:-what-and-why?","page":"Risk aversion","title":"Risk aversion: what and why?","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Often, the agents making decisions in complex systems are risk-averse, that is, they care more about avoiding very bad outcomes, than they do about having a good average outcome.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"As an example, consumers in a hydro-thermal problem may be willing to pay a slightly higher electricity price on average, if it means that there is a lower probability of blackouts.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Risk aversion in multistage stochastic programming has been well studied in the academic literature, and is widely used in production implementations around the world.","category":"page"},{"location":"explanation/risk/#Risk-measures","page":"Risk aversion","title":"Risk measures","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"One way to add risk aversion to models is to use a risk measure. A risk measure is a function that maps a random variable to a real number.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"You are probably already familiar with lots of different risk measures. For example, the mean, median, mode, and maximum are all risk measures.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We call the act of applying a risk measure to a random variable \"computing the risk\" of a random variable.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"To keep things simple, and because we need it for SDDP, we restrict our attention to random variables Z with a finite sample space Omega and positive probabilities p_omega for all omega in Omega. 
We denote the realizations of Z by Z(omega) = z_omega.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"A risk measure, mathbbFZ, is a convex risk measure if it satisfies the following axioms:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Axiom 1: monotonicity","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Given two random variables Z_1 and Z_2, with Z_1 le Z_2 almost surely, then mathbbFZ_1 le FZ_2.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Axiom 2: translation equivariance","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Given two random variables Z_1 and Z_2, then for all a in mathbbR, mathbbFZ + a = mathbbFZ + a.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Axiom 3: convexity","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Given two random variables Z_1 and Z_2, then for all a in 0 1,","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"mathbbFa Z_1 + (1 - a) Z_2 le a mathbbFZ_1 + (1-a)mathbbFZ_2","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Now we know what a risk measure is, let's see how we can use them to form risk-averse decision rules.","category":"page"},{"location":"explanation/risk/#Risk-averse-decision-rules:-Part-I","page":"Risk aversion","title":"Risk-averse decision rules: Part I","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We started this tutorial by explaining that we are interested in risk aversion because some agents are risk-averse. What that really means, is that they want a policy that is also risk-averse. The question then becomes, how do we create risk-averse decision rules and policies?","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Recall from Introductory theory that we can form an optimal decision rule using the recursive formulation:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"beginaligned\nV_i(x omega) = minlimits_barx x^prime u C_i(barx u omega) + mathbbE_j in i^+ varphi in Omega_jV_j(x^prime varphi)\n x^prime = T_i(barx u omega) \n u in U_i(barx omega) \n barx = x\nendaligned","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"where our decision rule, pi_i(x omega), solves this optimization problem and returns a u^* corresponding to an optimal solution.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"If we can replace the expectation operator mathbbE with another (more risk-averse) risk measure mathbbF, then our decision rule will attempt to choose a control decision now that minimizes the risk of the future costs, as opposed to the expectation of the future costs. 
This makes our decisions more risk-averse, because we care more about the worst outcomes than we do about the average.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Therefore, we can form a risk-averse decision rule using the formulation:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"beginaligned\nV_i(x omega) = minlimits_barx x^prime u C_i(barx u omega) + mathbbF_j in i^+ varphi in Omega_jV_j(x^prime varphi)\n x^prime = T_i(barx u omega) \n u in U_i(barx omega) \n barx = x\nendaligned","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"To convert this problem into a tractable equivalent, we apply Kelley's algorithm to the risk-averse cost-to-go term mathbbF_j in i^+ varphi in Omega_jV_j(x^prime varphi), to obtain the approximated problem:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"beginaligned\nV_i^K(x omega) = minlimits_barx x^prime u C_i(barx u omega) + theta\n x^prime = T_i(barx u omega) \n u in U_i(barx omega) \n barx = x \n theta ge mathbbF_j in i^+ varphi in Omega_jleftV_j^k(x^prime_k varphi)right + fracddx^primemathbbF_j in i^+ varphi in Omega_jleftV_j^k(x^prime_k varphi)right^top (x^prime - x^prime_k)quad k=1ldotsK\nendaligned","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"warning: Warning\nNote how we need to explicitly compute a risk-averse subgradient! (We need a subgradient because the function might not be differentiable.) When constructing cuts with the expectation operator in Introductory theory, we implicitly used the law of total expectation to combine the two expectations; we can't do that for a general risk measure.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"tip: Homework challenge\nIf it's not obvious why we can use Kelley's here, try to use the axioms of a convex risk measure to show that mathbbF_j in i^+ varphi in Omega_jV_j(x^prime varphi) is a convex function w.r.t. x^prime if V_j is also a convex function.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Our challenge is now to find a way to compute the risk-averse cost-to-go function mathbbF_j in i^+ varphi in Omega_jleftV_j^k(x^prime_k varphi)right, and a way to compute a subgradient of the risk-averse cost-to-go function with respect to x^prime.","category":"page"},{"location":"explanation/risk/#Primal-risk-measures","page":"Risk aversion","title":"Primal risk measures","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Now we know what a risk measure is, and how we will use it, let's implement some code to see how we can compute the risk of some random variables.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"note: Note\nWe're going to start by implementing the primal version of each risk measure. 
We implement the dual version in the next section.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"First, we need some data:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Z = [1.0, 2.0, 3.0, 4.0]","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"with probabilities:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"p = [0.1, 0.2, 0.4, 0.3]","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We're going to implement a number of different risk measures, so to leverage Julia's multiple dispatch, we create an abstract type:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"abstract type AbstractRiskMeasure end","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"and function to overload:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"\"\"\"\n primal_risk(F::AbstractRiskMeasure, Z::Vector{<:Real}, p::Vector{Float64})\n\nUse `F` to compute the risk of the random variable defined by a vector of costs\n`Z` and non-zero probabilities `p`.\n\"\"\"\nfunction primal_risk end","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"note: Note\nWe want Vector{<:Real} instead of Vector{Float64} because we're going to automatically differentiate this function in the next section.","category":"page"},{"location":"explanation/risk/#Expectation","page":"Risk aversion","title":"Expectation","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"The expectation, mathbbE, also called the mean or the average, is the most widely used convex risk measure. The expectation of a random variable is just the sum of Z weighted by the probability:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"mathbbFZ = mathbbE_pZ = sumlimits_omegainOmega p_omega z_omega","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"struct Expectation <: AbstractRiskMeasure end\n\nfunction primal_risk(::Expectation, Z::Vector{<:Real}, p::Vector{Float64})\n return sum(p[i] * Z[i] for i in 1:length(p))\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Let's try it out:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"primal_risk(Expectation(), Z, p)","category":"page"},{"location":"explanation/risk/#WorstCase","page":"Risk aversion","title":"WorstCase","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"The worst-case risk measure, also called the maximum, is another widely used convex risk measure. 
This risk measure doesn't care about the probability vector p, only the cost vector Z:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"mathbbFZ = maxZ = maxlimits_omegainOmega z_omega","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"struct WorstCase <: AbstractRiskMeasure end\n\nfunction primal_risk(::WorstCase, Z::Vector{<:Real}, ::Vector{Float64})\n return maximum(Z)\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Let's try it out:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"primal_risk(WorstCase(), Z, p)","category":"page"},{"location":"explanation/risk/#Entropic","page":"Risk aversion","title":"Entropic","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"A more interesting, and less widely used risk measure is the entropic risk measure. The entropic risk measure is parameterized by a value gamma 0, and computes the risk of a random variable as:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"mathbbF_gammaZ = frac1gammalogleft(mathbbE_pe^gamma Zright) = frac1gammalogleft(sumlimits_omegainOmegap_omega e^gamma z_omegaright)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"tip: Homework challenge\nProve that the entropic risk measure satisfies the three axioms of a convex risk measure.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"struct Entropic <: AbstractRiskMeasure\n γ::Float64\n function Entropic(γ)\n if !(γ > 0)\n throw(DomainError(γ, \"Entropic risk measure must have γ > 0.\"))\n end\n return new(γ)\n end\nend\n\nfunction primal_risk(F::Entropic, Z::Vector{<:Real}, p::Vector{Float64})\n return 1 / F.γ * log(sum(p[i] * exp(F.γ * Z[i]) for i in 1:length(p)))\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"warning: Warning\nexp(x) overflows when x 709. Therefore, if we are passed a vector of Float64, use arbitrary precision arithmetic with big.(Z).","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function primal_risk(F::Entropic, Z::Vector{Float64}, p::Vector{Float64})\n return Float64(primal_risk(F, big.(Z), p))\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Let's try it out for different values of gamma:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"for γ in [0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1_000.0]\n println(\"γ = $(γ), F[Z] = \", primal_risk(Entropic(γ), Z, p))\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"info: Info\nThe entropic has two extremes. As gamma rightarrow 0, the entropic acts like the expectation risk measure, and as gamma rightarrow infty, the entropic acts like the worst-case risk measure.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Computing risk measures this way works well for computing the primal value. 
However, there isn't an obvious way to compute a subgradient of the risk-averse cost-to-go function, which we need for our cut calculation.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"There is a nice solution to this problem, and that is to use the dual representation of a risk measure, instead of the primal.","category":"page"},{"location":"explanation/risk/#Dual-risk-measures","page":"Risk aversion","title":"Dual risk measures","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Convex risk measures have a dual representation as follows:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"mathbbFZ = suplimits_q inmathcalM(p) mathbbE_qZ - alpha(p q)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"where alpha is a concave function that maps the probability vectors p and q to a real number, and mathcalM(p) subseteq mathcalP is a convex subset of the probability simplex:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"mathcalP = p ge 0sumlimits_omegainOmegap_omega = 1","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"The dual of a convex risk measure can be interpreted as taking the expectation of the random variable Z with respect to the worst probability vector q that lies within the set mathcalM, less some concave penalty term alpha(p q).","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"If we define a function dual_risk_inner that computes q and α:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"\"\"\"\n dual_risk_inner(\n F::AbstractRiskMeasure, Z::Vector{Float64}, p::Vector{Float64}\n )::Tuple{Vector{Float64},Float64}\n\nReturn a tuple formed by the worst-case probability vector `q` and the\ncorresponding evaluation `α(p, q)`.\n\"\"\"\nfunction dual_risk_inner end","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"then we can write a generic dual_risk function as:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function dual_risk(\n F::AbstractRiskMeasure,\n Z::Vector{Float64},\n p::Vector{Float64},\n)\n q, α = dual_risk_inner(F, Z, p)\n return sum(q[i] * Z[i] for i in 1:length(q)) - α\nend","category":"page"},{"location":"explanation/risk/#Expectation-2","page":"Risk aversion","title":"Expectation","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"For the expectation risk measure, mathcalM(p) = p, and alpha(cdot cdot) = 0. 
Therefore:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function dual_risk_inner(::Expectation, ::Vector{Float64}, p::Vector{Float64})\n return p, 0.0\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We can check we get the same result as the primal version:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"dual_risk(Expectation(), Z, p) == primal_risk(Expectation(), Z, p)","category":"page"},{"location":"explanation/risk/#Worst-case","page":"Risk aversion","title":"Worst-case","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"For the worst-case risk measure, mathcalM(p) = mathcalP, and alpha(cdot cdot) = 0. Therefore, the dual representation just puts all of the probability weight on the maximum outcome:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function dual_risk_inner(::WorstCase, Z::Vector{Float64}, ::Vector{Float64})\n q = zeros(length(Z))\n _, index = findmax(Z)\n q[index] = 1.0\n return q, 0.0\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We can check we get the same result as the primal version:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"dual_risk(WorstCase(), Z, p) == primal_risk(WorstCase(), Z, p)","category":"page"},{"location":"explanation/risk/#Entropic-2","page":"Risk aversion","title":"Entropic","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"For the entropic risk measure, mathcalM(p) = mathcalP, and:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"alpha(p q) = frac1gammasumlimits_omegainOmega q_omega logleft(fracq_omegap_omegaright)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"One way to solve the dual problem is to explicitly solve a nonlinear optimization problem:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function dual_risk_inner(F::Entropic, Z::Vector{Float64}, p::Vector{Float64})\n N = length(p)\n model = JuMP.Model(Ipopt.Optimizer)\n JuMP.set_silent(model)\n # For this problem, the solve is more accurate if we turn off problem\n # scaling.\n JuMP.set_optimizer_attribute(model, \"nlp_scaling_method\", \"none\")\n JuMP.@variable(model, 0 <= q[1:N] <= 1)\n JuMP.@constraint(model, sum(q) == 1)\n JuMP.@NLexpression(\n model,\n α,\n 1 / F.γ * sum(q[i] * log(q[i] / p[i]) for i in 1:N),\n )\n JuMP.@NLobjective(model, Max, sum(q[i] * Z[i] for i in 1:N) - α)\n JuMP.optimize!(model)\n return JuMP.value.(q), JuMP.value(α)\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We can check we get the same result as the primal version:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"for γ in [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]\n primal = primal_risk(Entropic(γ), Z, p)\n dual = dual_risk(Entropic(γ), Z, p)\n success = primal ≈ dual ? 
\"✓\" : \"×\"\n println(\"$(success) γ = $(γ), primal = $(primal), dual = $(dual)\")\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"info: Info\nThis method of solving the dual problem \"on-the-side\" is used by SDDP.jl for a number of risk measures, including a distributionally robust risk measure with the Wasserstein distance. Check out all the risk measures that SDDP.jl supports in Add a risk measure.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"The \"on-the-side\" method is very general, and it lets us incorporate any convex risk measure into SDDP. However, this comes at an increased computational cost and potential numerical issues (e.g., not converging to the exact solution).","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"However, for the entropic risk measure, Dowson, Morton, and Pagnoncelli (2020) derive the following closed form solution for q^*:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"q_omega^* = fracp_omega e^gamma z_omegasumlimits_varphi in Omega p_varphi e^gamma z_varphi","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"This is faster because we don't need to use Ipopt, and it avoids some of the numerical issues associated with solving a nonlinear program.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function dual_risk_inner(F::Entropic, Z::Vector{Float64}, p::Vector{Float64})\n q, α = zeros(length(p)), big(0.0)\n peγz = p .* exp.(F.γ .* big.(Z))\n sum_peγz = sum(peγz)\n for i in 1:length(q)\n big_q = peγz[i] / sum_peγz\n α += big_q * log(big_q / p[i])\n q[i] = Float64(big_q)\n end\n return q, Float64(α / F.γ)\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"warning: Warning\nAgain, note that we use big to avoid introducing overflow errors, before explicitly casting back to Float64 for the values we return.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We can check we get the same result as the primal version:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"for γ in [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]\n primal = primal_risk(Entropic(γ), Z, p)\n dual = dual_risk(Entropic(γ), Z, p)\n success = primal ≈ dual ? 
\"✓\" : \"×\"\n println(\"$(success) γ = $(γ), primal = $(primal), dual = $(dual)\")\nend","category":"page"},{"location":"explanation/risk/#Risk-averse-subgradients","page":"Risk aversion","title":"Risk-averse subgradients","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We ended the section on primal risk measures by explaining how we couldn't use the primal risk measure in the cut calculation because we needed some way of computing a risk-averse subgradient:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"theta ge mathbbF_j in i^+ varphi in Omega_jleftV_j^k(x^prime_k varphi)right + fracddx^primemathbbF_j in i^+ varphi in Omega_jleftV_j^k(x^prime_k varphi)right^top (x^prime - x^prime_k)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"The reason we use the dual representation is because of the following theorem, which explains how to compute a risk-averse gradient.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"info: The risk-averse subgradient theorem\nLet omega in Omega index a random vector with finite support and with nominal probability mass function, p in mathcalP, which satisfies p 0.Consider a convex risk measure, mathbbF, with a convex risk set, mathcalM(p), so that mathbbF can be expressed as the dual form.Let V(xomega) be convex with respect to x for all fixed omegainOmega, and let lambda(tildex omega) be a subgradient of V(xomega) with respect to x at x = tildex for each omega in Omega.Then, sum_omegainOmegaq^*_omega lambda(tildexomega) is a subgradient of mathbbFV(xomega) at tildex, whereq^* in argmax_q in mathcalM(p)leftmathbbE_qV(tildexomega) - alpha(p q)right","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"This theorem can be a little hard to unpack, so let's see an example:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function dual_risk_averse_subgradient(\n V::Function,\n # Use automatic differentiation to compute the gradient of V w.r.t. x,\n # given a fixed ω.\n λ::Function = (x, ω) -> ForwardDiff.gradient(x -> V(x, ω), x);\n F::AbstractRiskMeasure,\n Ω::Vector,\n p::Vector{Float64},\n x̃::Vector{Float64},\n)\n # Evaluate the function at x=x̃ for all ω ∈ Ω.\n V_ω = [V(x̃, ω) for ω in Ω]\n # Solve the dual problem to obtain an optimal q^*.\n q, α = dual_risk_inner(F, V_ω, p)\n # Compute the risk-averse subgradient by taking the expectation of the\n # subgradients w.r.t. 
q^*.\n dVdx = sum(q[i] * λ(x̃, ω) for (i, ω) in enumerate(Ω))\n return dVdx\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We can compare the subgradient obtained with the dual form against the automatic differentiation of the primal_risk function.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function primal_risk_averse_subgradient(\n V::Function;\n F::AbstractRiskMeasure,\n Ω::Vector,\n p::Vector{Float64},\n x̃::Vector{Float64},\n)\n inner(x) = primal_risk(F, [V(x, ω) for ω in Ω], p)\n return ForwardDiff.gradient(inner, x̃)\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"As our example function, we use:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"V(x, ω) = ω * x[1]^2","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"with:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Ω = [1.0, 2.0, 3.0]","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"and:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"p = [0.3, 0.4, 0.3]","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"at the point:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"x̃ = [3.0]","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"If mathbbF is the expectation risk-measure, then:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"mathbbFV(x omega) = 2 x^2","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"The function evaluation x=3 is 18 and the subgradient is 12. Let's check we get it right with the dual form:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"dual_risk_averse_subgradient(V; F = Expectation(), Ω = Ω, p = p, x̃ = x̃)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"and the primal form:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"primal_risk_averse_subgradient(V; F = Expectation(), Ω = Ω, p = p, x̃ = x̃)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"If mathbbF is the worst-case risk measure, then:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"mathbbFV(x omega) = 3 x^2","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"The function evaluation at x=3 is 27, and the subgradient is 18. 
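(To see why: the worst-case risk measure puts all of the probability weight on the largest outcome omega = 3, so the risk-adjusted function is 3 x^2, and its derivative 6 x evaluates to 18 at x = 3.) 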
Let's check we get it right with the dual form:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"dual_risk_averse_subgradient(V; F = WorstCase(), Ω = Ω, p = p, x̃ = x̃)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"and the primal form:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"primal_risk_averse_subgradient(V; F = WorstCase(), Ω = Ω, p = p, x̃ = x̃)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"If mathbbF is the entropic risk measure, the math is a little more difficult to derive analytically. However, we can check against our primal version:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"for γ in [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]\n dual =\n dual_risk_averse_subgradient(V; F = Entropic(γ), Ω = Ω, p = p, x̃ = x̃)\n primal = primal_risk_averse_subgradient(\n V;\n F = Entropic(γ),\n Ω = Ω,\n p = p,\n x̃ = x̃,\n )\n success = primal ≈ dual ? \"✓\" : \"×\"\n println(\"$(success) γ = $(γ), primal = $(primal), dual = $(dual)\")\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Uh oh! What happened with the last line? It looks our primal_risk_averse_subgradient encountered an error and returned a subgradient of NaN. This is because of the overflow issue with exp(x). However, we can be confident that our dual method of computing the risk-averse subgradient is both correct and more numerically robust than the primal version.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"info: Info\nAs another sanity check, notice how as gamma rightarrow 0, we tend toward the solution of the expectation risk-measure [12], and as gamma rightarrow infty, we tend toward the solution of the worse-case risk measure [18].","category":"page"},{"location":"explanation/risk/#Risk-averse-decision-rules:-Part-II","page":"Risk aversion","title":"Risk-averse decision rules: Part II","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Why is the risk-averse subgradient theorem helpful? 
Using the dual representation of a convex risk measure, we can re-write the cut:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"theta ge mathbbF_j in i^+ varphi in Omega_jleftV_j^k(x^prime_k varphi)right + fracddx^primemathbbF_j in i^+ varphi in Omega_jleftV_j^k(x^prime_k varphi)right^top (x^prime - x^prime_k)quad k=1ldotsK","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"as:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"theta ge mathbbE_q_kleftV_j^k(x^prime_k varphi) + fracddx^primeV_j^k(x^prime_k varphi)^top (x^prime - x^prime_k)right - alpha(p q_k)quad k=1ldotsK","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"where q_k = mathrmargsuplimits_q inmathcalM(p) mathbbE_qV_j^k(x_k^prime varphi) - alpha(p q).","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Therefore, we can formulate a risk-averse decision rule as:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"beginaligned\nV_i^K(x omega) = minlimits_barx x^prime u C_i(barx u omega) + theta\n x^prime = T_i(barx u omega) \n u in U_i(barx omega) \n barx = x \n theta ge mathbbE_q_kleftV_j^k(x^prime_k varphi) + fracddx^primeV_j^k(x^prime_k varphi)^top (x^prime - x^prime_k)right - alpha(p q_k)quad k=1ldotsK \n theta ge M\nendaligned","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"where q_k = mathrmargsuplimits_q inmathcalM(p) mathbbE_qV_j^k(x_k^prime varphi) - alpha(p q).","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Thus, to implement risk-averse SDDP, all we need to do is modify the backward pass to include this calculation of q_k, form the cut using q_k instead of p, and subtract the penalty term alpha(p q_k).","category":"page"},{"location":"explanation/risk/#Implementation","page":"Risk aversion","title":"Implementation","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Now we're ready to implement our risk-averse version of SDDP.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"As a prerequisite, we need most of the code from Introductory theory.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"
\nClick to view code from the tutorial \"Introductory theory\".","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"struct State\n in::JuMP.VariableRef\n out::JuMP.VariableRef\nend\n\nstruct Uncertainty\n parameterize::Function\n Ω::Vector{Any}\n P::Vector{Float64}\nend\n\nstruct Node\n subproblem::JuMP.Model\n states::Dict{Symbol,State}\n uncertainty::Uncertainty\n cost_to_go::JuMP.VariableRef\nend\n\nstruct PolicyGraph\n nodes::Vector{Node}\n arcs::Vector{Dict{Int,Float64}}\nend\n\nfunction Base.show(io::IO, model::PolicyGraph)\n println(io, \"A policy graph with $(length(model.nodes)) nodes\")\n println(io, \"Arcs:\")\n for (from, arcs) in enumerate(model.arcs)\n for (to, probability) in arcs\n println(io, \" $(from) => $(to) w.p. $(probability)\")\n end\n end\n return\nend\n\nfunction PolicyGraph(\n subproblem_builder::Function;\n graph::Vector{Dict{Int,Float64}},\n lower_bound::Float64,\n optimizer,\n)\n nodes = Node[]\n for t in 1:length(graph)\n model = JuMP.Model(optimizer)\n states, uncertainty = subproblem_builder(model, t)\n JuMP.@variable(model, cost_to_go >= lower_bound)\n obj = JuMP.objective_function(model)\n JuMP.@objective(model, Min, obj + cost_to_go)\n if length(graph[t]) == 0\n JuMP.fix(cost_to_go, 0.0; force = true)\n end\n push!(nodes, Node(model, states, uncertainty, cost_to_go))\n end\n return PolicyGraph(nodes, graph)\nend\n\nfunction sample_uncertainty(uncertainty::Uncertainty)\n r = rand()\n for (p, ω) in zip(uncertainty.P, uncertainty.Ω)\n r -= p\n if r < 0.0\n return ω\n end\n end\n return error(\"We should never get here because P should sum to 1.0.\")\nend\n\nfunction sample_next_node(model::PolicyGraph, current::Int)\n if length(model.arcs[current]) == 0\n return nothing\n else\n r = rand()\n for (to, probability) in model.arcs[current]\n r -= probability\n if r < 0.0\n return to\n end\n end\n return nothing\n end\nend\n\nfunction forward_pass(model::PolicyGraph, io::IO = stdout)\n incoming_state =\n Dict(k => JuMP.fix_value(v.in) for (k, v) in model.nodes[1].states)\n simulation_cost = 0.0\n trajectory = Tuple{Int,Dict{Symbol,Float64}}[]\n t = 1\n while t !== nothing\n node = model.nodes[t]\n ω = sample_uncertainty(node.uncertainty)\n node.uncertainty.parameterize(ω)\n for (k, v) in incoming_state\n JuMP.fix(node.states[k].in, v; force = true)\n end\n JuMP.optimize!(node.subproblem)\n if JuMP.termination_status(node.subproblem) != JuMP.MOI.OPTIMAL\n error(\"Something went terribly wrong!\")\n end\n outgoing_state = Dict(k => JuMP.value(v.out) for (k, v) in node.states)\n stage_cost =\n JuMP.objective_value(node.subproblem) - JuMP.value(node.cost_to_go)\n simulation_cost += stage_cost\n incoming_state = outgoing_state\n push!(trajectory, (t, outgoing_state))\n t = sample_next_node(model, t)\n end\n return trajectory, simulation_cost\nend\n\nfunction upper_bound(model::PolicyGraph; replications::Int)\n simulations = [forward_pass(model, devnull) for i in 1:replications]\n z = [s[2] for s in simulations]\n μ = Statistics.mean(z)\n tσ = 1.96 * Statistics.std(z) / sqrt(replications)\n return μ, tσ\nend\n\nfunction lower_bound(model::PolicyGraph)\n node = model.nodes[1]\n bound = 0.0\n for (p, ω) in zip(node.uncertainty.P, node.uncertainty.Ω)\n node.uncertainty.parameterize(ω)\n JuMP.optimize!(node.subproblem)\n bound += p * JuMP.objective_value(node.subproblem)\n end\n return bound\nend\n\nfunction evaluate_policy(\n model::PolicyGraph;\n node::Int,\n incoming_state::Dict{Symbol,Float64},\n 
random_variable,\n)\n the_node = model.nodes[node]\n the_node.uncertainty.parameterize(random_variable)\n for (k, v) in incoming_state\n JuMP.fix(the_node.states[k].in, v; force = true)\n end\n JuMP.optimize!(the_node.subproblem)\n return Dict(\n k => JuMP.value.(v) for\n (k, v) in JuMP.object_dictionary(the_node.subproblem)\n )\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"
","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"First, we need to modify the backward pass to compute the cuts using the risk-averse subgradient theorem:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function backward_pass(\n model::PolicyGraph,\n trajectory::Vector{Tuple{Int,Dict{Symbol,Float64}}},\n io::IO = stdout;\n risk_measure::AbstractRiskMeasure,\n)\n println(io, \"| Backward pass\")\n for i in reverse(1:length(trajectory))\n index, outgoing_states = trajectory[i]\n node = model.nodes[index]\n println(io, \"| | Visiting node $(index)\")\n if length(model.arcs[index]) == 0\n continue\n end\n # =====================================================================\n # New! Create vectors to store the cut expressions, V(x,ω) and p:\n cut_expressions, V_ω, p = JuMP.AffExpr[], Float64[], Float64[]\n # =====================================================================\n for (j, P_ij) in model.arcs[index]\n next_node = model.nodes[j]\n for (k, v) in outgoing_states\n JuMP.fix(next_node.states[k].in, v; force = true)\n end\n for (pφ, φ) in zip(next_node.uncertainty.P, next_node.uncertainty.Ω)\n next_node.uncertainty.parameterize(φ)\n JuMP.optimize!(next_node.subproblem)\n V = JuMP.objective_value(next_node.subproblem)\n dVdx = Dict(\n k => JuMP.reduced_cost(v.in) for (k, v) in next_node.states\n )\n # =============================================================\n # New! Construct and append the expression\n # `V_j^K(x_k, φ) + dVdx_j^K(x'_k, φ)ᵀ(x - x_k)` to the list of\n # cut expressions.\n push!(\n cut_expressions,\n JuMP.@expression(\n node.subproblem,\n V + sum(\n dVdx[k] * (x.out - outgoing_states[k]) for\n (k, x) in node.states\n ),\n )\n )\n # Add the objective value to Z:\n push!(V_ω, V)\n # Add the probability to p:\n push!(p, P_ij * pφ)\n # =============================================================\n end\n end\n # =====================================================================\n # New! Using the solutions in V_ω, compute q and α:\n q, α = dual_risk_inner(risk_measure, V_ω, p)\n println(io, \"| | | Z = \", Z)\n println(io, \"| | | p = \", p)\n println(io, \"| | | q = \", q)\n println(io, \"| | | α = \", α)\n # Then add the cut:\n c = JuMP.@constraint(\n node.subproblem,\n node.cost_to_go >=\n sum(q[i] * cut_expressions[i] for i in 1:length(q)) - α\n )\n # =====================================================================\n println(io, \"| | | Adding cut : \", c)\n end\n return nothing\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We also need to update the train loop of SDDP to pass a risk measure to the backward pass:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function train(\n model::PolicyGraph;\n iteration_limit::Int,\n replications::Int,\n # =========================================================================\n # New! Add a risk_measure argument\n risk_measure::AbstractRiskMeasure,\n # =========================================================================\n io::IO = stdout,\n)\n for i in 1:iteration_limit\n println(io, \"Starting iteration $(i)\")\n outgoing_states, _ = forward_pass(model, io)\n # =====================================================================\n # New! 
Pass the risk measure to the backward pass.\n backward_pass(model, outgoing_states, io; risk_measure = risk_measure)\n # =====================================================================\n println(io, \"| Finished iteration\")\n println(io, \"| | lower_bound = \", lower_bound(model))\n end\n μ, tσ = upper_bound(model; replications = replications)\n println(io, \"Upper bound = $(μ) ± $(tσ)\")\n return\nend","category":"page"},{"location":"explanation/risk/#Risk-averse-bounds","page":"Risk aversion","title":"Risk-averse bounds","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"warning: Warning\nThis section is important.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"When we had a risk-neutral policy (i.e., we only used the expectation risk measure), we discussed how we could form valid lower and upper bounds.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"The upper bound is still valid as a Monte Carlo simulation of the expected cost of the policy. (Although this upper bound doesn't capture the change in the policy we wanted to achieve, namely that the impact of the worst outcomes were reduced.)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"However, if we use a different risk measure, the lower bound is no longer valid!","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We can still calculate a \"lower bound\" as the objective of the first-stage approximated subproblem, and this will converge to a finite value. However, we can't meaningfully interpret it as a bound with respect to the optimal policy. Therefore, it's best to just ignore the lower bound when training a risk-averse policy.","category":"page"},{"location":"explanation/risk/#Example:-risk-averse-hydro-thermal-scheduling","page":"Risk aversion","title":"Example: risk-averse hydro-thermal scheduling","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Now it's time for an example. 
We create the same problem as Introductory theory:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"model = PolicyGraph(;\n graph = [Dict(2 => 1.0), Dict(3 => 1.0), Dict{Int,Float64}()],\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, t\n JuMP.set_silent(subproblem)\n JuMP.@variable(subproblem, volume_in == 200)\n JuMP.@variable(subproblem, 0 <= volume_out <= 200)\n states = Dict(:volume => State(volume_in, volume_out))\n JuMP.@variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n inflow\n end)\n JuMP.@constraints(\n subproblem,\n begin\n volume_out == volume_in + inflow - hydro_generation - hydro_spill\n demand_constraint, thermal_generation + hydro_generation == 150.0\n end\n )\n fuel_cost = [50.0, 100.0, 150.0]\n JuMP.@objective(subproblem, Min, fuel_cost[t] * thermal_generation)\n uncertainty =\n Uncertainty([0.0, 50.0, 100.0], [1 / 3, 1 / 3, 1 / 3]) do ω\n return JuMP.fix(inflow, ω)\n end\n return states, uncertainty\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Then we train a risk-averse policy, passing a risk measure to train:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"train(\n model;\n iteration_limit = 3,\n replications = 100,\n risk_measure = Entropic(1.0),\n)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Finally, evaluate the decision rule:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"evaluate_policy(\n model;\n node = 1,\n incoming_state = Dict(:volume => 150.0),\n random_variable = 75,\n)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"info: Info\nFor this trivial example, the risk-averse policy isn't very different from the policy obtained using the expectation risk-measure. If you try it on some bigger/more interesting problems, you should see the expected cost increase, and the upper tail of the policy decrease.","category":"page"}] +[{"location":"guides/create_a_general_policy_graph/#Create-a-general-policy-graph","page":"Create a general policy graph","title":"Create a general policy graph","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"DocTestSetup = quote\n using SDDP, HiGHS\nend","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"SDDP.jl uses the concept of a policy graph to formulate multistage stochastic programming problems. For more details, read An introduction to SDDP.jl or the paper Dowson, O., (2020). The policy graph decomposition of multistage stochastic optimization problems. Networks, 76(1), 3-23. 
doi.","category":"page"},{"location":"guides/create_a_general_policy_graph/#Creating-a-[SDDP.Graph](@ref)","page":"Create a general policy graph","title":"Creating a SDDP.Graph","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/#Linear-graphs","page":"Create a general policy graph","title":"Linear graphs","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"Linear policy graphs can be created using the SDDP.LinearGraph function.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"julia> graph = SDDP.LinearGraph(3)\nRoot\n 0\nNodes\n 1\n 2\n 3\nArcs\n 0 => 1 w.p. 1.0\n 1 => 2 w.p. 1.0\n 2 => 3 w.p. 1.0","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"We can add nodes to a graph using SDDP.add_node and edges using SDDP.add_edge.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"julia> SDDP.add_node(graph, 4)\n\njulia> SDDP.add_edge(graph, 3 => 4, 1.0)\n\njulia> SDDP.add_edge(graph, 4 => 1, 0.9)\n\njulia> graph\nRoot\n 0\nNodes\n 1\n 2\n 3\n 4\nArcs\n 0 => 1 w.p. 1.0\n 1 => 2 w.p. 1.0\n 2 => 3 w.p. 1.0\n 3 => 4 w.p. 1.0\n 4 => 1 w.p. 0.9","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"Look! We just made a cyclic graph! SDDP.jl can solve infinite horizon problems. The probability on the arc that completes a cycle should be interpreted as a discount factor.","category":"page"},{"location":"guides/create_a_general_policy_graph/#guide_unicyclic_policy_graph","page":"Create a general policy graph","title":"Unicyclic policy graphs","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"Linear policy graphs with a single infinite-horizon cycle can be created using the SDDP.UnicyclicGraph function.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"julia> SDDP.UnicyclicGraph(0.95; num_nodes = 2)\nRoot\n 0\nNodes\n 1\n 2\nArcs\n 0 => 1 w.p. 1.0\n 1 => 2 w.p. 1.0\n 2 => 1 w.p. 0.95","category":"page"},{"location":"guides/create_a_general_policy_graph/#guide_markovian_policy_graph","page":"Create a general policy graph","title":"Markovian policy graphs","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"Markovian policy graphs can be created using the SDDP.MarkovianGraph function.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"julia> SDDP.MarkovianGraph(Matrix{Float64}[[1.0]', [0.4 0.6]])\nRoot\n (0, 1)\nNodes\n (1, 1)\n (2, 1)\n (2, 2)\nArcs\n (0, 1) => (1, 1) w.p. 1.0\n (1, 1) => (2, 1) w.p. 0.4\n (1, 1) => (2, 2) w.p. 
0.6","category":"page"},{"location":"guides/create_a_general_policy_graph/#General-graphs","page":"Create a general policy graph","title":"General graphs","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"Arbitrarily complicated graphs can be constructed using SDDP.Graph, SDDP.add_node and SDDP.add_edge. For example","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"julia> graph = SDDP.Graph(:root_node)\nRoot\n root_node\nNodes\n {}\nArcs\n {}\n\njulia> SDDP.add_node(graph, :decision_node)\n\njulia> SDDP.add_edge(graph, :root_node => :decision_node, 1.0)\n\njulia> SDDP.add_edge(graph, :decision_node => :decision_node, 0.9)\n\njulia> graph\nRoot\n root_node\nNodes\n decision_node\nArcs\n root_node => decision_node w.p. 1.0\n decision_node => decision_node w.p. 0.9","category":"page"},{"location":"guides/create_a_general_policy_graph/#Creating-a-policy-graph","page":"Create a general policy graph","title":"Creating a policy graph","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"Once you have constructed an instance of SDDP.Graph, you can create a policy graph by passing the graph as the first argument.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"julia> graph = SDDP.Graph(\n :root_node,\n [:decision_node],\n [\n (:root_node => :decision_node, 1.0),\n (:decision_node => :decision_node, 0.9)\n ]);\n\njulia> model = SDDP.PolicyGraph(\n graph,\n lower_bound = 0,\n optimizer = HiGHS.Optimizer) do subproblem, node\n println(\"Called from node: \", node)\n end;\nCalled from node: decision_node","category":"page"},{"location":"guides/create_a_general_policy_graph/#Special-cases","page":"Create a general policy graph","title":"Special cases","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"There are two special cases which cover the majority of models in the literature.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"SDDP.LinearPolicyGraph is a special case where a SDDP.LinearGraph is passed as the first argument.\nSDDP.MarkovianPolicyGraph is a special case where a SDDP.MarkovianGraph is passed as the first argument.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"Note that the type of the names of all nodes (including the root node) must be the same. In this case, they are Symbols.","category":"page"},{"location":"guides/create_a_general_policy_graph/#Simulating-non-standard-policy-graphs","page":"Create a general policy graph","title":"Simulating non-standard policy graphs","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"If you simulate a policy graph with a node that has outgoing arcs that sum to less than one, you will end up with simulations of different lengths. 
(The most common case is an infinite horizon stochastic program, aka a linear policy graph with a single cycle.)","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"To simulate a fixed number of stages, use:","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"simulations = SDDP.simulate(\n model,\n 1,\n sampling_scheme = SDDP.InSampleMonteCarlo(\n max_depth = 10,\n terminate_on_dummy_leaf = false\n )\n)","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"Here, max_depth controls the number of stages, and terminate_on_dummy_leaf = false stops us from terminating early.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"See also Simulate using a different sampling scheme.","category":"page"},{"location":"guides/create_a_general_policy_graph/#Creating-a-Markovian-graph-automatically","page":"Create a general policy graph","title":"Creating a Markovian graph automatically","text":"","category":"section"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"SDDP.jl can create a Markovian graph by automatically discretizing a one-dimensional stochastic process and fitting a Markov chain.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"To access this functionality, pass a function that takes no arguments and returns a Vector{Float64} to SDDP.MarkovianGraph. To keyword arguments also need to be provided: budget is the total number of nodes in the Markovian graph, and scenarios is the number of realizations of the simulator function used to approximate the graph.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"In some cases, scenarios may be too small to provide a reasonable fit of the stochastic process. If so, SDDP.jl will automatically try to re-fit the Markov chain using more scenarios.","category":"page"},{"location":"guides/create_a_general_policy_graph/","page":"Create a general policy graph","title":"Create a general policy graph","text":"function simulator()\n scenario = zeros(5)\n for i = 2:5\n scenario[i] = scenario[i - 1] + rand() - 0.5\n end\n return scenario\nend\n\nmodel = SDDP.PolicyGraph(\n SDDP.MarkovianGraph(simulator; budget = 10, scenarios = 100),\n sense = :Max,\n upper_bound = 1e3\n) do subproblem, node\n (stage, price) = node\n @variable(subproblem, x >= 0, SDDP.State, initial_value = 1)\n @constraint(subproblem, x.out <= x.in)\n @stageobjective(subproblem, price * x.out)\nend","category":"page"},{"location":"guides/debug_a_model/#Debug-a-model","page":"Debug a model","title":"Debug a model","text":"","category":"section"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"Building multistage stochastic programming models is hard. There are a lot of different pieces that need to be put together, and we typically have no idea of the optimal policy, so it can be hard (impossible?) 
to validate the solution.","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"That said, here are a few tips to verify and validate models built using SDDP.jl.","category":"page"},{"location":"guides/debug_a_model/#Writing-subproblems-to-file","page":"Debug a model","title":"Writing subproblems to file","text":"","category":"section"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"The first step to debug a model is to write out the subproblems to a file in order to check that you are actually building what you think you are building.","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"This can be achieved with the help of two functions: SDDP.parameterize and SDDP.write_subproblem_to_file. The first lets you parameterize a node given a noise, and the second writes out the subproblem to a file.","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"Here is an example model:","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"using SDDP, HiGHS\n\nmodel = SDDP.LinearPolicyGraph(\n stages = 2,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, t\n @variable(subproblem, x, SDDP.State, initial_value = 1)\n @variable(subproblem, y)\n @constraint(subproblem, balance, x.in == x.out + y)\n SDDP.parameterize(subproblem, [1.1, 2.2]) do ω\n @stageobjective(subproblem, ω * x.out)\n JuMP.fix(y, ω)\n end\nend\n\n# output\n\nA policy graph with 2 nodes.\n Node indices: 1, 2","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"Initially, model hasn't been parameterized with a concrete realizations of ω. Let's do so now by parameterizing the first subproblem with ω=1.1.","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"julia> SDDP.parameterize(model[1], 1.1)","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"Easy! To parameterize the second stage problem, we would have used model[2].","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"Now to write out the problem to a file. We'll get a few warnings because some variables and constraints don't have names. 
They don't matter, so ignore them.","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"julia> SDDP.write_subproblem_to_file(model[1], \"subproblem.lp\")\n\njulia> read(\"subproblem.lp\") |> String |> print\nminimize\nobj: 1.1 x_out + 1 x4\nsubject to\nbalance: 1 x_in - 1 x_out - 1 y = 0\nBounds\nx_in free\nx_out free\ny = 1.1\nx4 >= 0\nEnd","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"It is easy to see that ω has been set in the objective, and as the fixed value for y.","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"It is also possible to parameterize the subproblems using values for ω that are not in the original problem formulation.","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"julia> SDDP.parameterize(model[1], 3.3)\n\njulia> SDDP.write_subproblem_to_file(model[1], \"subproblem.lp\")\n\njulia> read(\"subproblem.lp\") |> String |> print\nminimize\nobj: 3.3 x_out + 1 x4\nsubject to\nbalance: 1 x_in - 1 x_out - 1 y = 0\nBounds\nx_in free\nx_out free\ny = 3.3\nx4 >= 0\nEnd\n\njulia> rm(\"subproblem.lp\") # Clean up.","category":"page"},{"location":"guides/debug_a_model/#Solve-the-deterministic-equivalent","page":"Debug a model","title":"Solve the deterministic equivalent","text":"","category":"section"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"Sometimes, it can be helpful to solve the deterministic equivalent of a problem in order to obtain an exact solution to the problem. To obtain a JuMP model that represents the deterministic equivalent, use SDDP.deterministic_equivalent. The returned model is just a normal JuMP model. Use JuMP to optimize it and query the solution.","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"julia> det_equiv = SDDP.deterministic_equivalent(model, HiGHS.Optimizer)\nA JuMP Model\n├ solver: HiGHS\n├ objective_sense: MIN_SENSE\n│ └ objective_function_type: AffExpr\n├ num_variables: 24\n├ num_constraints: 28\n│ ├ AffExpr in MOI.EqualTo{Float64}: 10\n│ ├ VariableRef in MOI.EqualTo{Float64}: 8\n│ ├ VariableRef in MOI.GreaterThan{Float64}: 6\n│ └ VariableRef in MOI.LessThan{Float64}: 4\n└ Names registered in the model: none\n\njulia> set_silent(det_equiv)\n\njulia> optimize!(det_equiv)\n\njulia> objective_value(det_equiv)\n-5.472500000000001","category":"page"},{"location":"guides/debug_a_model/","page":"Debug a model","title":"Debug a model","text":"warning: Warning\nThe deterministic equivalent scales poorly with problem size. 
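(The scenario tree grows exponentially: a linear policy graph with T stages and S noise realizations per stage has on the order of S^T scenarios, each of which adds its own copy of the stage variables and constraints.) 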
Only use this on small problems!","category":"page"},{"location":"guides/add_multidimensional_noise/#Add-multi-dimensional-noise-terms","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"","category":"section"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"DocTestSetup = quote\n using SDDP, HiGHS\nend","category":"page"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"Multi-dimensional stagewise-independent random variables can be created by forming the Cartesian product of the random variables.","category":"page"},{"location":"guides/add_multidimensional_noise/#A-simple-example","page":"Add multi-dimensional noise terms","title":"A simple example","text":"","category":"section"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"If the sample space and probabilities are given as vectors for each marginal distribution, do:","category":"page"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"julia> model = SDDP.LinearPolicyGraph(\n stages = 3,\n lower_bound = 0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, t\n @variable(subproblem, x, SDDP.State, initial_value = 0.0)\n Ω = [(value = v, coefficient = c) for v in [1, 2] for c in [3, 4, 5]]\n P = [v * c for v in [0.5, 0.5] for c in [0.3, 0.5, 0.2]]\n SDDP.parameterize(subproblem, Ω, P) do ω\n JuMP.fix(x.out, ω.value)\n @stageobjective(subproblem, ω.coefficient * x.out)\n println(\"ω is: \", ω)\n end\n end;\n\njulia> SDDP.simulate(model, 1);\nω is: (value = 1, coefficient = 4)\nω is: (value = 1, coefficient = 3)\nω is: (value = 2, coefficient = 4)","category":"page"},{"location":"guides/add_multidimensional_noise/#Using-Distributions.jl","page":"Add multi-dimensional noise terms","title":"Using Distributions.jl","text":"","category":"section"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"For sampling multidimensional random variates, it can be useful to use the Product type from Distributions.jl.","category":"page"},{"location":"guides/add_multidimensional_noise/#Finite-discrete-distributions","page":"Add multi-dimensional noise terms","title":"Finite discrete distributions","text":"","category":"section"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"There are several ways to go about this. If the sample space is finite, and small enough that it makes sense to enumerate each element, we can use Base.product and Distributions.support to generate the entire sample space Ω from each of the marginal distributions. 
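For intuition, Base.product simply forms every combination of the marginal supports; with two marginals that each take the values 0 or 1, the product contains the four points (0, 0), (1, 0), (0, 1), and (1, 1). A minimal sketch (the name small_Ω is illustrative and not part of the example that follows):\n\njulia> small_Ω = vec([collect(ω) for ω in Base.product(0:1, 0:1)]);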
","category":"page"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"We can then evaluate the density function of the product distribution on each element of this space to get the vector of corresponding probabilities, P.","category":"page"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"julia> import Distributions\n\njulia> distributions = [\n Distributions.Binomial(10, 0.5),\n Distributions.Bernoulli(0.5),\n Distributions.truncated(Distributions.Poisson(5), 2, 8)\n ];\n\njulia> supports = Distributions.support.(distributions);\n\njulia> Ω = vec([collect(ω) for ω in Base.product(supports...)]);\n\njulia> P = [Distributions.pdf(Distributions.Product(distributions), ω) for ω in Ω];\n\njulia> model = SDDP.LinearPolicyGraph(\n stages = 3,\n lower_bound = 0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, t\n @variable(subproblem, x, SDDP.State, initial_value = 0.0)\n SDDP.parameterize(subproblem, Ω, P) do ω\n JuMP.fix(x.out, ω[1])\n @stageobjective(subproblem, ω[2] * x.out + ω[3])\n println(\"ω is: \", ω)\n end\n end;\n\njulia> SDDP.simulate(model, 1);\nω is: [10, 0, 3]\nω is: [0, 1, 6]\nω is: [6, 0, 5]","category":"page"},{"location":"guides/add_multidimensional_noise/#Sampling","page":"Add multi-dimensional noise terms","title":"Sampling","text":"","category":"section"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"For sample spaces that are too large to explicitly represent, we can instead approximate the distribution by a sample of N points. Now Ω is a sample from the full sample space, and P is the uniform distribution over those points. Points with higher density in the full sample space will appear more frequently in Ω.","category":"page"},{"location":"guides/add_multidimensional_noise/","page":"Add multi-dimensional noise terms","title":"Add multi-dimensional noise terms","text":"julia> import Distributions\n\njulia> distributions = Distributions.Product([\n Distributions.Binomial(100, 0.5),\n Distributions.Geometric(1 / 20),\n Distributions.Poisson(20),\n ]);\n\njulia> N = 100;\n\njulia> Ω = [rand(distributions) for _ in 1:N];\n\njulia> P = fill(1 / N, N);\n\njulia> model = SDDP.LinearPolicyGraph(\n stages = 3,\n lower_bound = 0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, t\n @variable(subproblem, x, SDDP.State, initial_value = 0.0)\n SDDP.parameterize(subproblem, Ω, P) do ω\n JuMP.fix(x.out, ω[1])\n @stageobjective(subproblem, ω[2] * x.out + ω[3])\n println(\"ω is: \", ω)\n end\n end;\n\njulia> SDDP.simulate(model, 1);\nω is: [54, 38, 19]\nω is: [43, 3, 13]\nω is: [43, 4, 17]","category":"page"},{"location":"examples/booking_management/","page":"Booking management","title":"Booking management","text":"EditURL = \"booking_management.jl\"","category":"page"},{"location":"examples/booking_management/#Booking-management","page":"Booking management","title":"Booking management","text":"","category":"section"},{"location":"examples/booking_management/","page":"Booking management","title":"Booking management","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/booking_management/","page":"Booking management","title":"Booking management","text":"This example concerns the acceptance of booking requests for rooms in a hotel in the lead up to a large event.","category":"page"},{"location":"examples/booking_management/","page":"Booking management","title":"Booking management","text":"Each stage, we receive a booking request and can choose to accept or decline it. Once accepted, bookings cannot be terminated.","category":"page"},{"location":"examples/booking_management/","page":"Booking management","title":"Booking management","text":"using SDDP, HiGHS, Test\n\nfunction booking_management_model(num_days, num_rooms, num_requests)\n # maximum revenue that could be accrued.\n max_revenue = (num_rooms + num_requests) * num_days * num_rooms\n # booking_requests is a vector of {0,1} arrays of size\n # (num_days x num_rooms) if the room is requested.\n booking_requests = Array{Int,2}[]\n for room in 1:num_rooms\n for day in 1:num_days\n # note: length_of_stay is 0 indexed to avoid unnecessary +/- 1\n # on the indexing\n for length_of_stay in 0:(num_days-day)\n req = zeros(Int, (num_rooms, num_days))\n req[room:room, day.+(0:length_of_stay)] .= 1\n push!(booking_requests, req)\n end\n end\n end\n\n return model = SDDP.LinearPolicyGraph(;\n stages = num_requests,\n upper_bound = max_revenue,\n sense = :Max,\n optimizer = HiGHS.Optimizer,\n ) do sp, stage\n @variable(\n sp,\n 0 <= vacancy[room = 1:num_rooms, day = 1:num_days] <= 1,\n SDDP.State,\n Bin,\n initial_value = 1\n )\n @variables(\n sp,\n begin\n # Accept request for booking of room for length of time.\n 0 <= accept_request <= 1, Bin\n # Accept a booking for an individual room on an individual day.\n 0 <= room_request_accepted[1:num_rooms, 1:num_days] <= 1, Bin\n # Helper for JuMP.fix\n req[1:num_rooms, 1:num_days]\n end\n )\n for room in 1:num_rooms, day in 1:num_days\n @constraints(\n sp,\n begin\n # Update vacancy if we accept a room request\n vacancy[room, day].out ==\n vacancy[room, day].in - room_request_accepted[room, day]\n # Can't accept a request of a filled room\n room_request_accepted[room, day] <= vacancy[room, day].in\n # Can't accept invididual room request if entire request is declined\n room_request_accepted[room, day] <= accept_request\n # Can't accept request if room not requested\n room_request_accepted[room, day] <= req[room, day]\n # Accept all individual rooms is entire request is accepted\n room_request_accepted[room, day] + (1 - accept_request) >= req[room, day]\n end\n )\n end\n SDDP.parameterize(sp, booking_requests) do request\n return JuMP.fix.(req, request)\n end\n @stageobjective(\n sp,\n sum(\n (room + stage - 1) * room_request_accepted[room, day] for\n room in 1:num_rooms for day in 1:num_days\n )\n )\n end\nend\n\nfunction booking_management(duality_handler)\n m_1_2_5 = booking_management_model(1, 2, 5)\n SDDP.train(m_1_2_5; log_frequency = 5, duality_handler = duality_handler)\n if duality_handler == SDDP.ContinuousConicDuality()\n @test SDDP.calculate_bound(m_1_2_5) >= 7.25 - 1e-4\n else\n @test isapprox(SDDP.calculate_bound(m_1_2_5), 7.25, atol = 0.02)\n end\n\n m_2_2_3 = booking_management_model(2, 2, 3)\n SDDP.train(m_2_2_3; log_frequency = 10, duality_handler = duality_handler)\n if duality_handler == SDDP.ContinuousConicDuality()\n @test SDDP.calculate_bound(m_1_2_5) > 6.13\n else\n @test isapprox(SDDP.calculate_bound(m_2_2_3), 6.13, atol = 0.02)\n 
end\nend\n\nbooking_management(SDDP.ContinuousConicDuality())","category":"page"},{"location":"examples/booking_management/","page":"Booking management","title":"Booking management","text":"New version of HiGHS stalls booking_management(SDDP.LagrangianDuality())","category":"page"},{"location":"examples/no_strong_duality/","page":"No strong duality","title":"No strong duality","text":"EditURL = \"no_strong_duality.jl\"","category":"page"},{"location":"examples/no_strong_duality/#No-strong-duality","page":"No strong duality","title":"No strong duality","text":"","category":"section"},{"location":"examples/no_strong_duality/","page":"No strong duality","title":"No strong duality","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/no_strong_duality/","page":"No strong duality","title":"No strong duality","text":"This example is interesting, because strong duality doesn't hold for the extensive form (see if you can show why!), but we still converge.","category":"page"},{"location":"examples/no_strong_duality/","page":"No strong duality","title":"No strong duality","text":"using SDDP, HiGHS, Test\n\nfunction no_strong_duality()\n model = SDDP.PolicyGraph(\n SDDP.Graph(\n :root,\n [:node],\n [(:root => :node, 1.0), (:node => :node, 0.5)],\n );\n optimizer = HiGHS.Optimizer,\n lower_bound = 0.0,\n ) do sp, t\n @variable(sp, x, SDDP.State, initial_value = 1.0)\n @stageobjective(sp, x.out)\n @constraint(sp, x.in == x.out)\n end\n SDDP.train(model)\n @test SDDP.calculate_bound(model) ≈ 2.0 atol = 1e-5\n return\nend\n\nno_strong_duality()","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"CurrentModule = SDDP","category":"page"},{"location":"guides/add_integrality/#Integrality","page":"Integrality","title":"Integrality","text":"","category":"section"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"There's nothing special about binary and integer variables in SDDP.jl. Your models may contain a mix of binary, integer, or continuous state and control variables. 
Use the standard JuMP syntax to add binary or integer variables.","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"For example:","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"using SDDP, HiGHS\nmodel = SDDP.LinearPolicyGraph(\n stages = 3,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variable(sp, 0 <= x <= 100, Int, SDDP.State, initial_value = 0)\n @variable(sp, 0 <= u <= 200, integer = true)\n @variable(sp, v >= 0)\n @constraint(sp, x.out == x.in + u + v - 150)\n @stageobjective(sp, 2u + 6v + x.out)\nend","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"If you want finer control over how SDDP.jl computes subgradients in the backward pass, you can pass an SDDP.AbstractDualityHandler to the duality_handler argument of SDDP.train.","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"See Duality handlers for the list of handlers you can pass.","category":"page"},{"location":"guides/add_integrality/#Convergence","page":"Integrality","title":"Convergence","text":"","category":"section"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"SDDP.jl cannot guarantee that it will find a globally optimal policy when some of the variables are discrete. However, in most cases we find that it can still find an integer feasible policy that performs well in simulation.","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"Moreover, when the number of nodes in the graph is large, or there is uncertainty, we are not aware of another algorithm that can claim to find a globally optimal policy.","category":"page"},{"location":"guides/add_integrality/#Does-SDDP.jl-implement-the-SDDiP-algorithm?","page":"Integrality","title":"Does SDDP.jl implement the SDDiP algorithm?","text":"","category":"section"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"Most discussions of SDDiP in the literature confuse two unrelated things.","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"First, how to compute dual variables\nSecond, when the algorithm will converge to a globally optimal policy.","category":"page"},{"location":"guides/add_integrality/#Computing-dual-variables","page":"Integrality","title":"Computing dual variables","text":"","category":"section"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"The stochastic dual dynamic programming algorithm requires a subgradient of the objective with respect to the incoming state variable. 
","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"One way to obtain a valid subgradient is to compute an optimal value of the dual variable lambda in the following subproblem:","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"beginaligned\nV_i(x omega) = minlimits_barx x^prime u C_i(barx u omega) + mathbbE_j in i^+ varphi in Omega_jV_j(x^prime varphi)\n x^prime = T_i(barx u omega) \n u in U_i(barx omega) \n barx = x quad lambda\nendaligned","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"The easiest option is to relax integrality of the discrete variables to form a linear program and then use linear programming duality to obtain the dual. But we could also use Lagrangian duality without needing to relax the integrality constraints.","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"To compute the Lagrangian dual lambda, we penalize lambda^top(barx - x) in the objective instead of enforcing the constraint:","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"beginaligned\nmaxlimits_lambdaminlimits_barx x^prime u C_i(barx u omega) + mathbbE_j in i^+ varphi in Omega_jV_j(x^prime varphi) - lambda^top(barx - x)\n x^prime = T_i(barx u omega) \n u in U_i(barx omega)\nendaligned","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"You can use Lagrangian duality in SDDP.jl by passing SDDP.LagrangianDuality to the duality_handler argument of SDDP.train.","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"Compared with linear programming duality, the Lagrangian problem is difficult to solve because it requires the solution of many mixed-integer programs instead of a single linear program. This is one reason why \"SDDiP\" has poor performance.","category":"page"},{"location":"guides/add_integrality/#Convergence-2","page":"Integrality","title":"Convergence","text":"","category":"section"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"The second part to SDDiP is a very tightly scoped claim: if all of the state variables are binary and the algorithm uses Lagrangian duality to compute a subgradient, then it will converge to an optimal policy.","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"In many cases, papers claim to \"do SDDiP,\" but they have state variables which are not binary. In these cases, the algorithm is not guaranteed to converge to a globally optimal policy.","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"One work-around that has been suggested is to discretize the state variables into a set of binary state variables. 
However, this leads to a large number of binary state variables, which is another reason why \"SDDiP\" has poor performance.","category":"page"},{"location":"guides/add_integrality/","page":"Integrality","title":"Integrality","text":"In general, we recommend that you introduce integer variables into your model without fear of the consequences, and that you treat the resulting policy as a good heuristic, rather than an attempt to find a globally optimal policy.","category":"page"},{"location":"examples/StructDualDynProg.jl_prob5.2_2stages/","page":"StructDualDynProg: Problem 5.2, 2 stages","title":"StructDualDynProg: Problem 5.2, 2 stages","text":"EditURL = \"StructDualDynProg.jl_prob5.2_2stages.jl\"","category":"page"},{"location":"examples/StructDualDynProg.jl_prob5.2_2stages/#StructDualDynProg:-Problem-5.2,-2-stages","page":"StructDualDynProg: Problem 5.2, 2 stages","title":"StructDualDynProg: Problem 5.2, 2 stages","text":"","category":"section"},{"location":"examples/StructDualDynProg.jl_prob5.2_2stages/","page":"StructDualDynProg: Problem 5.2, 2 stages","title":"StructDualDynProg: Problem 5.2, 2 stages","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/StructDualDynProg.jl_prob5.2_2stages/","page":"StructDualDynProg: Problem 5.2, 2 stages","title":"StructDualDynProg: Problem 5.2, 2 stages","text":"This example comes from StochasticDualDynamicProgramming.jl","category":"page"},{"location":"examples/StructDualDynProg.jl_prob5.2_2stages/","page":"StructDualDynProg: Problem 5.2, 2 stages","title":"StructDualDynProg: Problem 5.2, 2 stages","text":"using SDDP, HiGHS, Test\n\nfunction test_prob52_2stages()\n model = SDDP.LinearPolicyGraph(;\n stages = 2,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, stage\n # ========== Problem data ==========\n n = 4\n m = 3\n i_c = [16, 5, 32, 2]\n C = [25, 80, 6.5, 160]\n T = [8760, 7000, 1500] / 8760\n D2 = [diff([0, 3919, 7329, 10315]) diff([0, 7086, 9004, 11169])]\n p2 = [0.9, 0.1]\n # ========== State Variables ==========\n @variable(subproblem, x[i = 1:n] >= 0, SDDP.State, initial_value = 0.0)\n # ========== Variables ==========\n @variables(subproblem, begin\n y[1:n, 1:m] >= 0\n v[1:n] >= 0\n penalty >= 0\n rhs_noise[1:m] # Dummy variable for RHS noise term.\n end)\n # ========== Constraints ==========\n @constraints(\n subproblem,\n begin\n [i = 1:n], x[i].out == x[i].in + v[i]\n [i = 1:n], sum(y[i, :]) <= x[i].in\n [j = 1:m], sum(y[:, j]) + penalty >= rhs_noise[j]\n end\n )\n if stage == 2\n # No investment in last stage.\n @constraint(subproblem, sum(v) == 0)\n end\n # ========== Uncertainty ==========\n if stage != 1 # no uncertainty in first stage\n SDDP.parameterize(subproblem, 1:size(D2, 2), p2) do ω\n for j in 1:m\n JuMP.fix(rhs_noise[j], D2[j, ω])\n end\n end\n end\n # ========== Stage objective ==========\n @stageobjective(subproblem, i_c' * v + C' * y * T + 1e6 * penalty)\n return\n end\n SDDP.train(model; log_frequency = 10)\n @test SDDP.calculate_bound(model) ≈ 340315.52 atol = 0.1\n return\nend\n\ntest_prob52_2stages()","category":"page"},{"location":"examples/stochastic_all_blacks/","page":"Stochastic All Blacks","title":"Stochastic All Blacks","text":"EditURL = \"stochastic_all_blacks.jl\"","category":"page"},{"location":"examples/stochastic_all_blacks/#Stochastic-All-Blacks","page":"Stochastic All Blacks","title":"Stochastic All 
Blacks","text":"","category":"section"},{"location":"examples/stochastic_all_blacks/","page":"Stochastic All Blacks","title":"Stochastic All Blacks","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/stochastic_all_blacks/","page":"Stochastic All Blacks","title":"Stochastic All Blacks","text":"using SDDP, HiGHS, Test\n\nfunction stochastic_all_blacks()\n # Number of time periods\n T = 3\n # Number of seats\n N = 2\n # R_ij = price of seat i at time j\n R = [3 3 6; 3 3 6]\n # Number of noises\n s = 3\n offers = [\n [[1, 1], [0, 0], [1, 1]],\n [[1, 0], [0, 0], [0, 0]],\n [[0, 1], [1, 0], [1, 1]],\n ]\n\n model = SDDP.LinearPolicyGraph(;\n stages = T,\n sense = :Max,\n upper_bound = 100.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, stage\n # Seat remaining?\n @variable(sp, 0 <= x[1:N] <= 1, SDDP.State, Bin, initial_value = 1)\n # Action: accept offer, or don't accept offer\n # We are allowed to accept some of the seats offered but not others\n @variable(sp, accept_offer[1:N], Bin)\n @variable(sp, offers_made[1:N])\n # Balance on seats\n @constraint(\n sp,\n balance[i in 1:N],\n x[i].in - x[i].out == accept_offer[i]\n )\n @stageobjective(sp, sum(R[i, stage] * accept_offer[i] for i in 1:N))\n SDDP.parameterize(sp, offers[stage]) do o\n return JuMP.fix.(offers_made, o)\n end\n @constraint(sp, accept_offer .<= offers_made)\n end\n\n SDDP.train(model; duality_handler = SDDP.LagrangianDuality())\n @test SDDP.calculate_bound(model) ≈ 8.0\n return\nend\n\nstochastic_all_blacks()","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"EditURL = \"example_milk_producer.jl\"","category":"page"},{"location":"tutorial/example_milk_producer/#Example:-the-milk-producer","page":"Example: the milk producer","title":"Example: the milk producer","text":"","category":"section"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"The purpose of this tutorial is to demonstrate how to fit a Markovian policy graph to a univariate stochastic process.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"This tutorial uses the following packages:","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"using SDDP\nimport HiGHS\nimport Plots","category":"page"},{"location":"tutorial/example_milk_producer/#Background","page":"Example: the milk producer","title":"Background","text":"","category":"section"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"A company produces milk for sale on a spot market each month. The quantity of milk they produce is uncertain, and so too is the price on the spot market. 
The company can store unsold milk in a stockpile of dried milk powder.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"The spot price is determined by an auction system, and so varies from month to month, but exhibits serial correlation. In each auction, there is sufficient demand that the milk producer finds a buyer for all their milk, regardless of the quantity they supply. Furthermore, the spot price is independent of the milk producer (they are a small player in the market).","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"The spot price is highly volatile, and is the result of a process that is out of the control of the company. To counteract their price risk, the company engages in a forward contracting programme.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"The forward contracting programme is a deal for physical milk four months in the future.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"The futures price is the current spot price, plus some forward contango (the buyers gain certainty that they will receive the milk in the future).","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"In general, the milk company should forward contract (since they reduce their price risk); however, they also have production risk. Therefore, it may be the case that they forward contract a fixed amount, but find that they do not produce enough milk to meet the fixed demand. They are then forced to buy additional milk on the spot market.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"The goal of the milk company is to choose the extent to which they forward contract in order to maximise (risk-adjusted) revenues, whilst managing their production risk.","category":"page"},{"location":"tutorial/example_milk_producer/#A-stochastic-process-for-price","page":"Example: the milk producer","title":"A stochastic process for price","text":"","category":"section"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"It is outside the scope of this tutorial, but assume that we have gone away and analysed historical data to fit a stochastic process to the sequence of monthly auction spot prices.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"One plausible model is a multiplicative auto-regressive model of order one, where the white noise term is modeled by a finite distribution of empirical residuals. 
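As a sketch (consistent with the simulator defined below, where \mu is the long-run mean price, \alpha is the speed of mean reversion, and \varepsilon_t is drawn uniformly from the finite set of empirical residuals), the recursion is:\n\n\log y_t = (1 - \alpha)\log y_{t-1} + \alpha \log \mu + \varepsilon_t\n\nwith y_t then clamped to the interval [3, 9].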
We can simulate this stochastic process as follows:","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"function simulator()\n residuals = [0.0987, 0.199, 0.303, 0.412, 0.530, 0.661, 0.814, 1.010, 1.290]\n residuals = 0.1 * vcat(-residuals, 0.0, residuals)\n scenario = zeros(12)\n y, μ, α = 4.5, 6.0, 0.05\n for t in 1:12\n y = exp((1 - α) * log(y) + α * log(μ) + rand(residuals))\n scenario[t] = clamp(y, 3.0, 9.0)\n end\n return scenario\nend\n\nsimulator()","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"It may be helpful to visualize a number of simulations of the price process:","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"plot = Plots.plot(\n [simulator() for _ in 1:500];\n color = \"gray\",\n opacity = 0.2,\n legend = false,\n xlabel = \"Month\",\n ylabel = \"Price [\\$/kg]\",\n xlims = (1, 12),\n ylims = (3, 9),\n)","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"The prices gradually revert to the mean of $6/kg, and there is high volatility.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"We can't incorporate this price process directly into SDDP.jl, but we can fit a SDDP.MarkovianGraph directly from the simulator:","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"graph = SDDP.MarkovianGraph(simulator; budget = 30, scenarios = 10_000);\nnothing # hide","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"Here budget is the number of nodes in the policy graph, and scenarios is the number of simulations to use when estimating the transition probabilities.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"The graph contains too many nodes to be show, but we can plot it:","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"for ((t, price), edges) in graph.nodes\n for ((t′, price′), probability) in edges\n Plots.plot!(\n plot,\n [t, t′],\n [price, price′];\n color = \"red\",\n width = 3 * probability,\n )\n end\nend\n\nplot","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"That looks okay. Try changing budget and scenarios to see how different Markovian policy graphs can be created.","category":"page"},{"location":"tutorial/example_milk_producer/#Model","page":"Example: the milk producer","title":"Model","text":"","category":"section"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"Now that we have a Markovian graph, we can build the model. See if you can work out how we arrived at this formulation by reading the background description. 
Do all the variables and constraints make sense?","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"model = SDDP.PolicyGraph(\n graph;\n sense = :Max,\n upper_bound = 1e2,\n optimizer = HiGHS.Optimizer,\n) do sp, node\n # Decompose the node into the month (::Int) and spot price (::Float64)\n t, price = node::Tuple{Int,Float64}\n # Transactions on the futures market cost 0.01\n c_transaction = 0.01\n # It costs the company +50% to buy milk on the spot market and deliver to\n # their customers\n c_buy_premium = 1.5\n # Buyer is willing to pay +5% for certainty\n c_contango = 1.05\n # Distribution of production\n Ω_production = range(0.1, 0.2; length = 5)\n c_max_production = 12 * maximum(Ω_production)\n # x_stock: quantity of milk in stock pile\n @variable(sp, 0 <= x_stock, SDDP.State, initial_value = 0)\n # x_forward[i]: quantity of milk for delivery in i months\n @variable(sp, 0 <= x_forward[1:4], SDDP.State, initial_value = 0)\n # u_spot_sell: quantity of milk to sell on spot market\n @variable(sp, 0 <= u_spot_sell <= c_max_production)\n # u_spot_buy: quantity of milk to buy on spot market\n @variable(sp, 0 <= u_spot_buy <= c_max_production)\n # u_spot_buy: quantity of milk to sell on futures market\n c_max_futures = t <= 8 ? c_max_production : 0.0\n @variable(sp, 0 <= u_forward_sell <= c_max_futures)\n # ω_production: production random variable\n @variable(sp, ω_production)\n # Forward contracting constraints:\n @constraint(sp, [i in 1:3], x_forward[i].out == x_forward[i+1].in)\n @constraint(sp, x_forward[4].out == u_forward_sell)\n # Stockpile balance constraint\n @constraint(\n sp,\n x_stock.out ==\n x_stock.in + ω_production + u_spot_buy - x_forward[1].in - u_spot_sell\n )\n # The random variables. `price` comes from the Markov node\n #\n # !!! warning\n # The elements in Ω MUST be a tuple with 1 or 2 values, where the first\n # value is `price` and the second value is the random variable for the\n # current node. If the node is deterministic, use Ω = [(price,)].\n Ω = [(price, p) for p in Ω_production]\n SDDP.parameterize(sp, Ω) do ω\n # Fix the ω_production variable\n fix(ω_production, ω[2])\n @stageobjective(\n sp,\n # Sales on spot market\n ω[1] * (u_spot_sell - c_buy_premium * u_spot_buy) +\n # Sales on futures smarket\n (ω[1] * c_contango - c_transaction) * u_forward_sell\n )\n return\n end\n return\nend","category":"page"},{"location":"tutorial/example_milk_producer/#Training-a-policy","page":"Example: the milk producer","title":"Training a policy","text":"","category":"section"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"Now we have a model, we train a policy. The SDDP.SimulatorSamplingScheme is used in the forward pass. It generates an out-of-sample sequence of prices using simulator and traverses the closest sequence of nodes in the policy graph. 
When calling SDDP.parameterize for each subproblem, it uses the new out-of-sample price instead of the price associated with the Markov node.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"SDDP.train(\n model;\n time_limit = 20,\n risk_measure = 0.5 * SDDP.Expectation() + 0.5 * SDDP.AVaR(0.25),\n sampling_scheme = SDDP.SimulatorSamplingScheme(simulator),\n)","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"warning: Warning\nWe're intentionally terminating the training early so that the documentation doesn't take too long to build. If you run this example locally, increase the time limit.","category":"page"},{"location":"tutorial/example_milk_producer/#Simulating-the-policy","page":"Example: the milk producer","title":"Simulating the policy","text":"","category":"section"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"When simulating the policy, we can also use the SDDP.SimulatorSamplingScheme.","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"simulations = SDDP.simulate(\n model,\n 200,\n Symbol[:x_stock, :u_forward_sell, :u_spot_sell, :u_spot_buy];\n sampling_scheme = SDDP.SimulatorSamplingScheme(simulator),\n);\nnothing # hide","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"To show how the sampling scheme uses the new out-of-sample price instead of the price associated with the Markov node, compare the index of the Markov state visited in stage 12 of the first simulation:","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"simulations[1][12][:node_index]","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"to the realization of the noise (price, ω) passed to SDDP.parameterize:","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"simulations[1][12][:noise_term]","category":"page"},{"location":"tutorial/example_milk_producer/#Visualizing-the-policy","page":"Example: the milk producer","title":"Visualizing the policy","text":"","category":"section"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"Finally, we can plot the policy to gain insight (although note that we terminated the training early, so we should run the re-train the policy for more iterations before making too many judgements).","category":"page"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"plot = Plots.plot(\n SDDP.publication_plot(simulations; title = \"x_stock.out\") do data\n return data[:x_stock].out\n end,\n SDDP.publication_plot(simulations; title = \"u_forward_sell\") do data\n return data[:u_forward_sell]\n end,\n SDDP.publication_plot(simulations; title = \"u_spot_buy\") do data\n return data[:u_spot_buy]\n end,\n SDDP.publication_plot(simulations; title = \"u_spot_sell\") do data\n return data[:u_spot_sell]\n 
end;\n layout = (2, 2),\n)","category":"page"},{"location":"tutorial/example_milk_producer/#Next-steps","page":"Example: the milk producer","title":"Next steps","text":"","category":"section"},{"location":"tutorial/example_milk_producer/","page":"Example: the milk producer","title":"Example: the milk producer","text":"Train the policy for longer. What do you observe?\nTry creating different Markovian graphs. What happens if you add more nodes?\nTry different risk measures","category":"page"},{"location":"examples/FAST_production_management/","page":"FAST: the production management problem","title":"FAST: the production management problem","text":"EditURL = \"FAST_production_management.jl\"","category":"page"},{"location":"examples/FAST_production_management/#FAST:-the-production-management-problem","page":"FAST: the production management problem","title":"FAST: the production management problem","text":"","category":"section"},{"location":"examples/FAST_production_management/","page":"FAST: the production management problem","title":"FAST: the production management problem","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/FAST_production_management/","page":"FAST: the production management problem","title":"FAST: the production management problem","text":"An implementation of the Production Management example from FAST","category":"page"},{"location":"examples/FAST_production_management/","page":"FAST: the production management problem","title":"FAST: the production management problem","text":"using SDDP, HiGHS, Test\n\nfunction fast_production_management(; cut_type)\n DEMAND = [2, 10]\n H = 3\n N = 2\n C = [0.2, 0.7]\n S = 2 .+ [0.33, 0.54]\n model = SDDP.LinearPolicyGraph(;\n stages = H,\n lower_bound = -50.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, t\n @variable(sp, x[1:N] >= 0, SDDP.State, initial_value = 0.0)\n @variables(sp, begin\n s[i = 1:N] >= 0\n d\n end)\n @constraints(sp, begin\n [i = 1:N], s[i] <= x[i].in\n sum(s) <= d\n end)\n SDDP.parameterize(sp, t == 1 ? [0] : DEMAND) do ω\n return JuMP.fix(d, ω)\n end\n @stageobjective(sp, sum(C[i] * x[i].out for i in 1:N) - S's)\n end\n SDDP.train(model; cut_type = cut_type, print_level = 2, log_frequency = 5)\n @test SDDP.calculate_bound(model) ≈ -23.96 atol = 1e-2\nend\n\nfast_production_management(; cut_type = SDDP.SINGLE_CUT)\nfast_production_management(; cut_type = SDDP.MULTI_CUT)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"EditURL = \"example_reservoir.jl\"","category":"page"},{"location":"tutorial/example_reservoir/#Example:-deterministic-to-stochastic","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"","category":"section"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"The purpose of this tutorial is to explain how we can go from a deterministic time-staged optimal control model in JuMP to a multistage stochastic optimization model in SDDP.jl. 
As a motivating problem, we consider the hydro-thermal problem with a single reservoir.","category":"page"},{"location":"tutorial/example_reservoir/#Packages","page":"Example: deterministic to stochastic","title":"Packages","text":"","category":"section"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"This tutorial requires the following packages:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"using JuMP\nusing SDDP\nimport CSV\nimport DataFrames\nimport HiGHS\nimport Plots","category":"page"},{"location":"tutorial/example_reservoir/#Data","page":"Example: deterministic to stochastic","title":"Data","text":"","category":"section"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"First, we need some data for the problem. For this tutorial, we'll write CSV files to a temporary directory from Julia. If you have an existing file, you could change the filename to point to that instead.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"dir = mktempdir()\nfilename = joinpath(dir, \"example_reservoir.csv\")","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Here is the data","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"csv_data = \"\"\"\nweek,inflow,demand,cost\n1,3,7,10.2\\n2,2,7.1,10.4\\n3,3,7.2,10.6\\n4,2,7.3,10.9\\n5,3,7.4,11.2\\n\n6,2,7.6,11.5\\n7,3,7.8,11.9\\n8,2,8.1,12.3\\n9,3,8.3,12.7\\n10,2,8.6,13.1\\n\n11,3,8.9,13.6\\n12,2,9.2,14\\n13,3,9.5,14.5\\n14,2,9.8,14.9\\n15,3,10.1,15.3\\n\n16,2,10.4,15.8\\n17,3,10.7,16.2\\n18,2,10.9,16.6\\n19,3,11.2,17\\n20,3,11.4,17.4\\n\n21,3,11.6,17.7\\n22,2,11.7,18\\n23,3,11.8,18.3\\n24,2,11.9,18.5\\n25,3,12,18.7\\n\n26,2,12,18.9\\n27,3,12,19\\n28,2,11.9,19.1\\n29,3,11.8,19.2\\n30,2,11.7,19.2\\n\n31,3,11.6,19.2\\n32,2,11.4,19.2\\n33,3,11.2,19.1\\n34,2,10.9,19\\n35,3,10.7,18.9\\n\n36,2,10.4,18.8\\n37,3,10.1,18.6\\n38,2,9.8,18.5\\n39,3,9.5,18.4\\n40,3,9.2,18.2\\n\n41,2,8.9,18.1\\n42,3,8.6,17.9\\n43,2,8.3,17.8\\n44,3,8.1,17.7\\n45,2,7.8,17.6\\n\n46,3,7.6,17.5\\n47,2,7.4,17.5\\n48,3,7.3,17.5\\n49,2,7.2,17.5\\n50,3,7.1,17.6\\n\n51,3,7,17.7\\n52,3,7,17.8\\n\n\"\"\"\nwrite(filename, csv_data);\nnothing #hide","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"And here we read it into a DataFrame:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"data = CSV.read(filename, DataFrames.DataFrame)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"It's easier to visualize the data if we plot it:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Plots.plot(\n Plots.plot(data[!, :inflow]; ylabel 
= \"Inflow\"),\n Plots.plot(data[!, :demand]; ylabel = \"Demand\"),\n Plots.plot(data[!, :cost]; ylabel = \"Cost\", xlabel = \"Week\");\n layout = (3, 1),\n legend = false,\n)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"The number of weeks will be useful later:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"T = size(data, 1)","category":"page"},{"location":"tutorial/example_reservoir/#Deterministic-JuMP-model","page":"Example: deterministic to stochastic","title":"Deterministic JuMP model","text":"","category":"section"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"To start, we construct a deterministic model in pure JuMP.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Create a JuMP model, using HiGHS as the optimizer:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"model = Model(HiGHS.Optimizer)\nset_silent(model)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"x_storage[t]: the amount of water in the reservoir at the start of stage t:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"reservoir_max = 320.0\n@variable(model, 0 <= x_storage[1:T+1] <= reservoir_max)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"We need an initial condition for x_storage[1]. 
Fix it to 300 units:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"reservoir_initial = 300\nfix(x_storage[1], reservoir_initial; force = true)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"u_flow[t]: the amount of water to flow through the turbine in stage t:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"flow_max = 12\n@variable(model, 0 <= u_flow[1:T] <= flow_max)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"u_spill[t]: the amount of water to spill from the reservoir in stage t, bypassing the turbine:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"@variable(model, 0 <= u_spill[1:T])","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"u_thermal[t]: the amount of thermal generation in stage t:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"@variable(model, 0 <= u_thermal[1:T])","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"ω_inflow[t]: the amount of inflow to the reservoir in stage t:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"@variable(model, ω_inflow[1:T])","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"For this model, our inflow is fixed, so we fix it to the data we have:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"for t in 1:T\n fix(ω_inflow[t], data[t, :inflow])\nend","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"The water balance constraint says that the water in the reservoir at the start of stage t+1 is the water in the reservoir at the start of stage t, less the amount flowed through the turbine, u_flow[t], less the amount spilled, u_spill[t], plus the amount of inflow, ω_inflow[t], into the reservoir:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"@constraint(\n model,\n [t in 1:T],\n x_storage[t+1] == x_storage[t] - u_flow[t] - u_spill[t] + ω_inflow[t],\n)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"We also need a supply = demand constraint. 
In practice, the units of this would be in MWh, and there would be a conversion factor between the amount of water flowing through the turbine and the power output. To simplify, we assume that power and water have the same units, so that one \"unit\" of demand is equal to one \"unit\" of the reservoir x_storage[t]:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"@constraint(model, [t in 1:T], u_flow[t] + u_thermal[t] == data[t, :demand])","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Our objective is to minimize the cost of thermal generation:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"@objective(model, Min, sum(data[t, :cost] * u_thermal[t] for t in 1:T))","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Let's optimize and check the solution","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"optimize!(model)\nsolution_summary(model)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"The total cost is:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"objective_value(model)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Here's a plot of demand and generation:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Plots.plot(data[!, :demand]; label = \"Demand\", xlabel = \"Week\")\nPlots.plot!(value.(u_thermal); label = \"Thermal\")\nPlots.plot!(value.(u_flow); label = \"Hydro\")","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"And here's the storage over time:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Plots.plot(value.(x_storage); label = \"Storage\", xlabel = \"Week\")","category":"page"},{"location":"tutorial/example_reservoir/#Deterministic-SDDP-model","page":"Example: deterministic to stochastic","title":"Deterministic SDDP model","text":"","category":"section"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"For the next step, we show how to decompose our JuMP model into SDDP.jl. 
It should obtain the same solution.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"model = SDDP.LinearPolicyGraph(;\n stages = T,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variable(\n sp,\n 0 <= x_storage <= reservoir_max,\n SDDP.State,\n initial_value = reservoir_initial,\n )\n @variable(sp, 0 <= u_flow <= flow_max)\n @variable(sp, 0 <= u_thermal)\n @variable(sp, 0 <= u_spill)\n @variable(sp, ω_inflow)\n fix(ω_inflow, data[t, :inflow])\n @constraint(sp, x_storage.out == x_storage.in - u_flow - u_spill + ω_inflow)\n @constraint(sp, u_flow + u_thermal == data[t, :demand])\n @stageobjective(sp, data[t, :cost] * u_thermal)\n return\nend","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Can you see how the JuMP model maps to this syntax? We have created a SDDP.LinearPolicyGraph with T stages, we're minimizing, and we're using HiGHS.Optimizer as the optimizer.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"A few bits might be non-obvious:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"We need to provide a lower bound for the objective function. Since our costs are always positive, a valid lower bound for the total cost is 0.0.\nWe define x_storage as a state variable using SDDP.State. A state variable is any variable that flows through time, and for which we need to know the value of it in stage t-1 to compute the best action in stage t. The state variable x_storage is actually two decision variables, x_storage.in and x_storage.out, which represent x_storage[t] and x_storage[t+1] respectively.\nWe need to use @stageobjective instead of @objective.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Instead of calling JuMP.optimize!, SDDP.jl uses a train method. With our machine learning hat on, you can think of SDDP.jl as training a function for each stage that accepts the current reservoir state as input and returns the optimal actions as output. It is also an iterative algorithm, so we need to specify when it should terminate:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"SDDP.train(model; iteration_limit = 10)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"As a quick sanity check, did we get the same cost as our JuMP model?","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"SDDP.calculate_bound(model)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"That's good. Next, to check the value of the decision variables. This isn't as straight forward as our JuMP model. 
Instead, we need to simulate the policy, and then extract the values of the decision variables from the results of the simulation.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Since our model is deterministic, we need only 1 replication of the simulation, and we want to record the values of the x_storage, u_flow, and u_thermal variables:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"simulations = SDDP.simulate(\n model,\n 1, # Number of replications\n [:x_storage, :u_flow, :u_thermal],\n);\nnothing #hide","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"The simulations vector is too big to show. But it contains one element for each replication, and each replication contains one dictionary for each stage.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"For example, the data corresponding to the tenth stage in the first replication is:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"simulations[1][10]","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Let's grab the trace of the u_thermal and u_flow variables in the first replication, and then plot them:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"r_sim = [sim[:u_thermal] for sim in simulations[1]]\nu_sim = [sim[:u_flow] for sim in simulations[1]]\n\nPlots.plot(data[!, :demand]; label = \"Demand\", xlabel = \"Week\")\nPlots.plot!(r_sim; label = \"Thermal\")\nPlots.plot!(u_sim; label = \"Hydro\")","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Perfect. That's the same as we got before.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Now let's look at x_storage. This is a little more complicated, because we need to grab the outgoing value of the state variable in each stage:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"x_sim = [sim[:x_storage].out for sim in simulations[1]]\n\nPlots.plot(x_sim; label = \"Storage\", xlabel = \"Week\")","category":"page"},{"location":"tutorial/example_reservoir/#Stochastic-SDDP-model","page":"Example: deterministic to stochastic","title":"Stochastic SDDP model","text":"","category":"section"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Now we add some randomness to our model. 
In each stage, we assume that the inflow could be: 2 units lower, with 30% probability; the same as before, with 40% probability; or 5 units higher, with 30% probability.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"model = SDDP.LinearPolicyGraph(;\n stages = T,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variable(\n sp,\n 0 <= x_storage <= reservoir_max,\n SDDP.State,\n initial_value = reservoir_initial,\n )\n @variable(sp, 0 <= u_flow <= flow_max)\n @variable(sp, 0 <= u_thermal)\n @variable(sp, 0 <= u_spill)\n @variable(sp, ω_inflow)\n # <--- This bit is new\n Ω, P = [-2, 0, 5], [0.3, 0.4, 0.3]\n SDDP.parameterize(sp, Ω, P) do ω\n fix(ω_inflow, data[t, :inflow] + ω)\n return\n end\n # --->\n @constraint(sp, x_storage.out == x_storage.in - u_flow - u_spill + ω_inflow)\n @constraint(sp, u_flow + u_thermal == data[t, :demand])\n @stageobjective(sp, data[t, :cost] * u_thermal)\n return\nend","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Can you see the differences?","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Let's train our new model. We need more iterations because of the stochasticity:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"SDDP.train(model; iteration_limit = 100)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Now simulate the policy. This time we do 100 replications because the policy is now stochastic instead of deterministic:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"simulations =\n SDDP.simulate(model, 100, [:x_storage, :u_flow, :u_thermal, :ω_inflow]);\nnothing #hide","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"And let's plot the use of thermal generation in each replication:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"plot = Plots.plot(data[!, :demand]; label = \"Demand\", xlabel = \"Week\")\nfor simulation in simulations\n Plots.plot!(plot, [sim[:u_thermal] for sim in simulation]; label = \"\")\nend\nplot","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Viewing an interpreting static plots like this is difficult, particularly as the number of simulations grows. 
SDDP.jl includes an interactive SpaghettiPlot that makes things easier:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"plot = SDDP.SpaghettiPlot(simulations)\nSDDP.add_spaghetti(plot; title = \"Storage\") do sim\n return sim[:x_storage].out\nend\nSDDP.add_spaghetti(plot; title = \"Hydro\") do sim\n return sim[:u_flow]\nend\nSDDP.add_spaghetti(plot; title = \"Inflow\") do sim\n return sim[:ω_inflow]\nend\nSDDP.plot(\n plot,\n \"spaghetti_plot.html\";\n # We need this to build the documentation. Set to true if running locally.\n open = false,\n)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"info: Info\nIf you have trouble viewing the plot, you can open it in a new window.","category":"page"},{"location":"tutorial/example_reservoir/#Cyclic-graphs","page":"Example: deterministic to stochastic","title":"Cyclic graphs","text":"","category":"section"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"One major problem with our model is that the reservoir is empty at the end of the time horizon. This is because our model does not consider the cost of future years after the T weeks.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"We can fix this using a cyclic policy graph. 
One way to construct a graph is with the SDDP.UnicyclicGraph constructor:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"SDDP.UnicyclicGraph(0.7; num_nodes = 2)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"This graph has two nodes, and a loop from node 2 back to node 1 with probability 0.7.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"We can construct a cyclic policy graph as follows:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"graph = SDDP.UnicyclicGraph(0.95; num_nodes = T)\nmodel = SDDP.PolicyGraph(\n graph;\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variable(\n sp,\n 0 <= x_storage <= reservoir_max,\n SDDP.State,\n initial_value = reservoir_initial,\n )\n @variable(sp, 0 <= u_flow <= flow_max)\n @variable(sp, 0 <= u_thermal)\n @variable(sp, 0 <= u_spill)\n @variable(sp, ω_inflow)\n Ω, P = [-2, 0, 5], [0.3, 0.4, 0.3]\n SDDP.parameterize(sp, Ω, P) do ω\n fix(ω_inflow, data[t, :inflow] + ω)\n return\n end\n @constraint(sp, x_storage.out == x_storage.in - u_flow - u_spill + ω_inflow)\n @constraint(sp, u_flow + u_thermal == data[t, :demand])\n @stageobjective(sp, data[t, :cost] * u_thermal)\n return\nend","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Notice how the only thing that has changed is our graph; the subproblems remain the same.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Let's train a policy:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"SDDP.train(model; iteration_limit = 100)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"When we simulate now, each trajectory will be a different length, because each cycle has a 95% probability of continuing and a 5% probability of stopping.","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"simulations = SDDP.simulate(model, 3);\nlength.(simulations)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"We can simulate a fixed number of cycles by passing a sampling_scheme:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"simulations = SDDP.simulate(\n model,\n 100,\n [:x_storage, :u_flow];\n sampling_scheme = SDDP.InSampleMonteCarlo(;\n max_depth = 5 * T,\n terminate_on_dummy_leaf = false,\n ),\n);\nlength.(simulations)","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: 
deterministic to stochastic","text":"Let's visualize the policy:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Plots.plot(\n SDDP.publication_plot(simulations; ylabel = \"Storage\") do sim\n return sim[:x_storage].out\n end,\n SDDP.publication_plot(simulations; ylabel = \"Hydro\") do sim\n return sim[:u_flow]\n end;\n layout = (2, 1),\n)","category":"page"},{"location":"tutorial/example_reservoir/#Next-steps","page":"Example: deterministic to stochastic","title":"Next steps","text":"","category":"section"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Our model is very basic. There are many aspects that we could improve:","category":"page"},{"location":"tutorial/example_reservoir/","page":"Example: deterministic to stochastic","title":"Example: deterministic to stochastic","text":"Can you add a second reservoir to make a river chain?\nCan you modify the problem and data to use proper units, including a conversion between the volume of water flowing through the turbine and the electrical power output?","category":"page"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"CurrentModule = SDDP","category":"page"},{"location":"changelog/#Release-notes","page":"Release notes","title":"Release notes","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.","category":"page"},{"location":"changelog/#v1.9.0-(October-17,-2024)","page":"Release notes","title":"v1.9.0 (October 17, 2024)","text":"","category":"section"},{"location":"changelog/#Added","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added write_only_selected_cuts and cut_selection keyword arguments to write_cuts_to_file and read_cuts_from_file to skip potentially expensive operations (#781) (#784)\nAdded set_numerical_difficulty_callback to modify the subproblem on numerical difficulty (#790)","category":"page"},{"location":"changelog/#Fixed","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed the tests to skip threading tests if running in serial (#770)\nFixed BanditDuality to handle the case where the standard deviation is NaN (#779)\nFixed an error when lagged state variables are encountered in MSPFormat (#786)\nFixed publication_plot with replications of different lengths (#788)\nFixed CTRL+C interrupting the code at unsafe points (#789)","category":"page"},{"location":"changelog/#Other","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#771) (#772)\nUpdated printing because of changes in JuMP (#773)","category":"page"},{"location":"changelog/#v1.8.1-(August-5,-2024)","page":"Release notes","title":"v1.8.1 (August 5, 2024)","text":"","category":"section"},{"location":"changelog/#Fixed-2","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed various issues with SDDP.Threaded() (#761)\nFixed a deprecation warning for 
sorting a dictionary (#763)","category":"page"},{"location":"changelog/#Other-2","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Updated copyright notices (#762)\nUpdated .JuliaFormatter.toml (#764)","category":"page"},{"location":"changelog/#v1.8.0-(July-24,-2024)","page":"Release notes","title":"v1.8.0 (July 24, 2024)","text":"","category":"section"},{"location":"changelog/#Added-2","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added SDDP.Threaded(), which is an experimental parallel scheme that supports solving problems using multiple threads. Some parts of SDDP.jl may not be thread-safe, and this can cause incorrect results, segfaults, or other errors. Please use with care and report any issues by opening a GitHub issue. (#758)","category":"page"},{"location":"changelog/#Other-3","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Documentation improvements and fixes (#747) (#759)","category":"page"},{"location":"changelog/#v1.7.0-(June-4,-2024)","page":"Release notes","title":"v1.7.0 (June 4, 2024)","text":"","category":"section"},{"location":"changelog/#Added-3","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added sample_backward_noise_terms_with_state for creating backward pass sampling schemes that depend on the current primal state. (#742) (Thanks @arthur-brigatto)","category":"page"},{"location":"changelog/#Fixed-3","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed error message when publication_plot has non-finite data (#738)","category":"page"},{"location":"changelog/#Other-4","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Updated the logo constructor (#730)","category":"page"},{"location":"changelog/#v1.6.7-(February-1,-2024)","page":"Release notes","title":"v1.6.7 (February 1, 2024)","text":"","category":"section"},{"location":"changelog/#Fixed-4","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed non-constant state dimension in the MSPFormat reader (#695)\nFixed SimulatorSamplingScheme for deterministic nodes (#710)\nFixed line search in BFGS (#711)\nFixed handling of NEARLY_FEASIBLE_POINT status (#726)","category":"page"},{"location":"changelog/#Other-5","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#692) (#694) (#706) (#716) (#727)\nUpdated to StochOptFormat v1.0 (#705)\nAdded an experimental OuterApproximation algorithm (#709)\nUpdated .gitignore (#717)\nAdded code for MDP paper (#720) (#721)\nAdded Google analytics (#723)","category":"page"},{"location":"changelog/#v1.6.6-(September-29,-2023)","page":"Release notes","title":"v1.6.6 (September 29, 2023)","text":"","category":"section"},{"location":"changelog/#Other-6","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release 
notes","title":"Release notes","text":"Updated Example: two-stage newsvendor tutorial (#689)\nAdded a warning for people using SDDP.Statistical (#687)","category":"page"},{"location":"changelog/#v1.6.5-(September-25,-2023)","page":"Release notes","title":"v1.6.5 (September 25, 2023)","text":"","category":"section"},{"location":"changelog/#Fixed-5","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed duplicate nodes in MarkovianGraph (#681)","category":"page"},{"location":"changelog/#Other-7","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Updated tutorials (#677) (#678) (#682) (#683)\nFixed documentation preview (#679)","category":"page"},{"location":"changelog/#v1.6.4-(September-23,-2023)","page":"Release notes","title":"v1.6.4 (September 23, 2023)","text":"","category":"section"},{"location":"changelog/#Fixed-6","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed error for invalid log_frequency values (#665)\nFixed objective sense in deterministic_equivalent (#673)","category":"page"},{"location":"changelog/#Other-8","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Documentation updates (#658) (#666) (#671)\nSwitch to GitHub action for deploying docs (#668) (#670)\nUpdate to Documenter@1 (#669)","category":"page"},{"location":"changelog/#v1.6.3-(September-8,-2023)","page":"Release notes","title":"v1.6.3 (September 8, 2023)","text":"","category":"section"},{"location":"changelog/#Fixed-7","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed default stopping rule with iteration_limit or time_limit set (#662)","category":"page"},{"location":"changelog/#Other-9","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Various documentation improvements (#651) (#657) (#659) (#660)","category":"page"},{"location":"changelog/#v1.6.2-(August-24,-2023)","page":"Release notes","title":"v1.6.2 (August 24, 2023)","text":"","category":"section"},{"location":"changelog/#Fixed-8","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"MSPFormat now detect and exploit stagewise independent lattices (#653)\nFixed set_optimizer for models read from file (#654)","category":"page"},{"location":"changelog/#Other-10","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed typo in pglib_opf.jl (#647)\nFixed documentation build and added color (#652)","category":"page"},{"location":"changelog/#v1.6.1-(July-20,-2023)","page":"Release notes","title":"v1.6.1 (July 20, 2023)","text":"","category":"section"},{"location":"changelog/#Fixed-9","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed bugs in MSPFormat reader (#638) (#639)","category":"page"},{"location":"changelog/#Other-11","page":"Release 
notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Clarified OutOfSampleMonteCarlo docstring (#643)","category":"page"},{"location":"changelog/#v1.6.0-(July-3,-2023)","page":"Release notes","title":"v1.6.0 (July 3, 2023)","text":"","category":"section"},{"location":"changelog/#Added-4","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added RegularizedForwardPass (#624)\nAdded FirstStageStoppingRule (#634)","category":"page"},{"location":"changelog/#Other-12","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Removed an unbound type parameter (#632)\nFixed typo in docstring (#633)\nAdded Here-and-now and hazard-decision tutorial (#635)","category":"page"},{"location":"changelog/#v1.5.1-(June-30,-2023)","page":"Release notes","title":"v1.5.1 (June 30, 2023)","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"This release contains a number of minor code changes, but it has a large impact on the content that is printed to screen. In particular, we now log periodically, instead of each iteration, and a \"good\" stopping rule is used as the default if none are specified. Try using SDDP.train(model) to see the difference.","category":"page"},{"location":"changelog/#Other-13","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed various typos in the documentation (#617)\nFixed printing test after changes in JuMP (#618)\nSet SimulationStoppingRule as the default stopping rule (#619)\nChanged the default logging frequency. Pass log_every_seconds = 0.0 to train to revert to the old behavior. (#620)\nAdded example usage with Distributions.jl (@slwu89) (#622)\nRemoved the numerical issue @warn (#627)\nImproved the quality of docstrings (#630)","category":"page"},{"location":"changelog/#v1.5.0-(May-14,-2023)","page":"Release notes","title":"v1.5.0 (May 14, 2023)","text":"","category":"section"},{"location":"changelog/#Added-5","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added the ability to use a different model for the forward pass. This is a novel feature that lets you train better policies when the model is non-convex or does not have a well-defined dual. See the Alternative forward models tutorial in which we train convex and non-convex formulations of the optimal power flow problem. (#611)","category":"page"},{"location":"changelog/#Other-14","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Updated missing changelog entries (#608)\nRemoved global variables (#610)\nConverted the Options struct to keyword arguments. This struct was a private implementation detail, but the change is breaking if you developed an extension to SDDP that touched these internals. 
(#612)\nFixed some typos (#613)","category":"page"},{"location":"changelog/#v1.4.0-(May-8,-2023)","page":"Release notes","title":"v1.4.0 (May 8, 2023)","text":"","category":"section"},{"location":"changelog/#Added-6","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added SDDP.SimulationStoppingRule (#598)\nAdded sampling_scheme argument to SDDP.write_to_file (#607)","category":"page"},{"location":"changelog/#Fixed-10","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed parsing of some MSPFormat files (#602) (#604)\nFixed printing in header (#605)","category":"page"},{"location":"changelog/#v1.3.0-(May-3,-2023)","page":"Release notes","title":"v1.3.0 (May 3, 2023)","text":"","category":"section"},{"location":"changelog/#Added-7","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added experimental support for SDDP.MSPFormat.read_from_file (#593)","category":"page"},{"location":"changelog/#Other-15","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Updated to StochOptFormat v0.3 (#600)","category":"page"},{"location":"changelog/#v1.2.1-(May-1,-2023)","page":"Release notes","title":"v1.2.1 (May 1, 2023)","text":"","category":"section"},{"location":"changelog/#Fixed-11","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed log_every_seconds (#597)","category":"page"},{"location":"changelog/#v1.2.0-(May-1,-2023)","page":"Release notes","title":"v1.2.0 (May 1, 2023)","text":"","category":"section"},{"location":"changelog/#Added-8","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added SDDP.SimulatorSamplingScheme (#594)\nAdded log_every_seconds argument to SDDP.train (#595)","category":"page"},{"location":"changelog/#Other-16","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Tweaked how the log is printed (#588)\nUpdated to StochOptFormat v0.2 (#592)","category":"page"},{"location":"changelog/#v1.1.4-(April-10,-2023)","page":"Release notes","title":"v1.1.4 (April 10, 2023)","text":"","category":"section"},{"location":"changelog/#Fixed-12","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Logs are now flushed every iteration (#584)","category":"page"},{"location":"changelog/#Other-17","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added docstrings to various functions (#581)\nMinor documentation updates (#580)\nClarified integrality documentation (#582)\nUpdated the README (#585)\nNumber of numerical issues is now printed to the log (#586)","category":"page"},{"location":"changelog/#v1.1.3-(April-2,-2023)","page":"Release notes","title":"v1.1.3 (April 2, 2023)","text":"","category":"section"},{"location":"changelog/#Other-18","page":"Release 
notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed typo in Example: deterministic to stochastic tutorial (#578)\nFixed typo in documentation of SDDP.simulate (#577)","category":"page"},{"location":"changelog/#v1.1.2-(March-18,-2023)","page":"Release notes","title":"v1.1.2 (March 18, 2023)","text":"","category":"section"},{"location":"changelog/#Other-19","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added Example: deterministic to stochastic tutorial (#572)","category":"page"},{"location":"changelog/#v1.1.1-(March-16,-2023)","page":"Release notes","title":"v1.1.1 (March 16, 2023)","text":"","category":"section"},{"location":"changelog/#Other-20","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed email in Project.toml\nAdded notebook to documentation tutorials (#571)","category":"page"},{"location":"changelog/#v1.1.0-(January-12,-2023)","page":"Release notes","title":"v1.1.0 (January 12, 2023)","text":"","category":"section"},{"location":"changelog/#Added-9","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added the node_name_parser argument to SDDP.write_cuts_to_file and added the option to skip nodes in SDDP.read_cuts_from_file (#565)","category":"page"},{"location":"changelog/#v1.0.0-(January-3,-2023)","page":"Release notes","title":"v1.0.0 (January 3, 2023)","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Although we're bumping MAJOR version, this is a non-breaking release. Going forward:","category":"page"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"New features will bump the MINOR version\nBug fixes, maintenance, and documentation updates will bump the PATCH version\nWe will support only the Long Term Support (currently v1.6.7) and the latest patch (currently v1.8.4) releases of Julia. Updates to the LTS version will bump the MINOR version\nUpdates to the compat bounds of package dependencies will bump the PATCH version.","category":"page"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"We do not intend any breaking changes to the public API, which would require a new MAJOR release. The public API is everything defined in the documentation. 
Anything not in the documentation is considered private and may change in any PATCH release.","category":"page"},{"location":"changelog/#Added-10","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added num_nodes argument to SDDP.UnicyclicGraph (#562)\nAdded support for passing an optimizer to SDDP.Asynchronous (#545)","category":"page"},{"location":"changelog/#Other-21","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Updated Plotting tools to use live plots (#563)\nAdded vale as a linter (#565)\nImproved documentation for initializing a parallel scheme (#566)","category":"page"},{"location":"changelog/#v0.4.9-(January-3,-2023)","page":"Release notes","title":"v0.4.9 (January 3, 2023)","text":"","category":"section"},{"location":"changelog/#Added-11","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added SDDP.UnicyclicGraph (#556)","category":"page"},{"location":"changelog/#Other-22","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added tutorial on Markov Decision Processes (#556)\nAdded two-stage newsvendor tutorial (#557)\nRefactored the layout of the documentation (#554) (#555)\nUpdated copyright to 2023 (#558)\nFixed errors in the documentation (#561)","category":"page"},{"location":"changelog/#v0.4.8-(December-19,-2022)","page":"Release notes","title":"v0.4.8 (December 19, 2022)","text":"","category":"section"},{"location":"changelog/#Added-12","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added terminate_on_cycle option to SDDP.Historical (#549)\nAdded include_last_node option to SDDP.DefaultForwardPass (#547)","category":"page"},{"location":"changelog/#Fixed-13","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Reverted then fixed (#531) because it failed to account for problems with integer variables (#546) (#551)","category":"page"},{"location":"changelog/#v0.4.7-(December-17,-2022)","page":"Release notes","title":"v0.4.7 (December 17, 2022)","text":"","category":"section"},{"location":"changelog/#Added-13","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added initial_node support to InSampleMonteCarlo and OutOfSampleMonteCarlo (#535)","category":"page"},{"location":"changelog/#Fixed-14","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Rethrow InterruptException when solver is interrupted (#534)\nFixed numerical recovery when we need dual solutions (#531) (Thanks @bfpc)\nFixed re-using the dashboard = true option between solves (#538)\nFixed bug when no @stageobjective is set (now defaults to 0.0) (#539)\nFixed errors thrown when invalid inputs are provided to add_objective_state (#540)","category":"page"},{"location":"changelog/#Other-23","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release 
notes","text":"Drop support for Julia versions prior to 1.6 (#533)\nUpdated versions of dependencies (#522) (#533)\nSwitched to HiGHS in the documentation and tests (#533)\nAdded license headers (#519)\nFixed link in air conditioning example (#521) (Thanks @conema)\nClarified variable naming in deterministic equivalent (#525) (Thanks @lucasprocessi)\nAdded this change log (#536)\nCuts are now written to model.cuts.json when numerical instability is discovered. This can aid debugging because it allows to you reload the cuts as of the iteration that caused the numerical issue (#537)","category":"page"},{"location":"changelog/#v0.4.6-(March-25,-2022)","page":"Release notes","title":"v0.4.6 (March 25, 2022)","text":"","category":"section"},{"location":"changelog/#Other-24","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Updated to JuMP v1.0 (#517)","category":"page"},{"location":"changelog/#v0.4.5-(March-9,-2022)","page":"Release notes","title":"v0.4.5 (March 9, 2022)","text":"","category":"section"},{"location":"changelog/#Fixed-15","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed issue with set_silent in a subproblem (#510)","category":"page"},{"location":"changelog/#Other-25","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed many typos (#500) (#501) (#506) (#511) (Thanks @bfpc)\nUpdate to JuMP v0.23 (#514)\nAdded auto-regressive tutorial (#507)","category":"page"},{"location":"changelog/#v0.4.4-(December-11,-2021)","page":"Release notes","title":"v0.4.4 (December 11, 2021)","text":"","category":"section"},{"location":"changelog/#Added-14","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added BanditDuality (#471)\nAdded benchmark scripts (#475) (#476) (#490)\nwrite_cuts_to_file now saves visited states (#468)","category":"page"},{"location":"changelog/#Fixed-16","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed BoundStalling in a deterministic policy (#470) (#474)\nFixed magnitude warning with zero coefficients (#483)","category":"page"},{"location":"changelog/#Other-26","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Improvements to LagrangianDuality (#481) (#482) (#487)\nImprovements to StrengthenedConicDuality (#486)\nSwitch to functional form for the tests (#478)\nFixed typos (#472) (Thanks @vfdev-5)\nUpdate to JuMP v0.22 (#498)","category":"page"},{"location":"changelog/#v0.4.3-(August-31,-2021)","page":"Release notes","title":"v0.4.3 (August 31, 2021)","text":"","category":"section"},{"location":"changelog/#Added-15","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added biobjective solver (#462)\nAdded forward_pass_callback (#466)","category":"page"},{"location":"changelog/#Other-27","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Update tutorials and documentation 
(#459) (#465)\nOrganize how paper materials are stored (#464)","category":"page"},{"location":"changelog/#v0.4.2-(August-24,-2021)","page":"Release notes","title":"v0.4.2 (August 24, 2021)","text":"","category":"section"},{"location":"changelog/#Fixed-17","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed a bug in Lagrangian duality (#457)","category":"page"},{"location":"changelog/#v0.4.1-(August-23,-2021)","page":"Release notes","title":"v0.4.1 (August 23, 2021)","text":"","category":"section"},{"location":"changelog/#Other-28","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Minor changes to our implementation of LagrangianDuality (#454) (#455)","category":"page"},{"location":"changelog/#v0.4.0-(August-17,-2021)","page":"Release notes","title":"v0.4.0 (August 17, 2021)","text":"","category":"section"},{"location":"changelog/#Breaking","page":"Release notes","title":"Breaking","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"A large refactoring for how we handle stochastic integer programs. This added support for things like SDDP.ContinuousConicDuality and SDDP.LagrangianDuality. It was breaking because we removed the integrality_handler argument to PolicyGraph. (#449) (#453)","category":"page"},{"location":"changelog/#Other-29","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#447) (#448) (#450)","category":"page"},{"location":"changelog/#v0.3.17-(July-6,-2021)","page":"Release notes","title":"v0.3.17 (July 6, 2021)","text":"","category":"section"},{"location":"changelog/#Added-16","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added SDDP.PSRSamplingScheme (#426)","category":"page"},{"location":"changelog/#Other-30","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Display more model attributes (#438)\nDocumentation improvements (#433) (#437) (#439)","category":"page"},{"location":"changelog/#v0.3.16-(June-17,-2021)","page":"Release notes","title":"v0.3.16 (June 17, 2021)","text":"","category":"section"},{"location":"changelog/#Added-17","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added SDDP.RiskAdjustedForwardPass (#413)\nAllow SDDP.Historical to sample sequentially (#420)","category":"page"},{"location":"changelog/#Other-31","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Update risk measure docstrings (#418)","category":"page"},{"location":"changelog/#v0.3.15-(June-1,-2021)","page":"Release notes","title":"v0.3.15 (June 1, 2021)","text":"","category":"section"},{"location":"changelog/#Added-18","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added SDDP.StoppingChain","category":"page"},{"location":"changelog/#Fixed-18","page":"Release 
notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed scoping bug in SDDP.@stageobjective (#407)\nFixed a bug when the initial point is infeasible (#411)\nSet subproblems to silent by default (#409)","category":"page"},{"location":"changelog/#Other-32","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Add JuliaFormatter (#412)\nDocumentation improvements (#406) (#408)","category":"page"},{"location":"changelog/#v0.3.14-(March-30,-2021)","page":"Release notes","title":"v0.3.14 (March 30, 2021)","text":"","category":"section"},{"location":"changelog/#Fixed-19","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed O(N^2) behavior in get_same_children (#393)","category":"page"},{"location":"changelog/#v0.3.13-(March-27,-2021)","page":"Release notes","title":"v0.3.13 (March 27, 2021)","text":"","category":"section"},{"location":"changelog/#Fixed-20","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed bug in print.jl\nFixed compat of Reexport (#388)","category":"page"},{"location":"changelog/#v0.3.12-(March-22,-2021)","page":"Release notes","title":"v0.3.12 (March 22, 2021)","text":"","category":"section"},{"location":"changelog/#Added-19","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added problem statistics to header (#385) (#386)","category":"page"},{"location":"changelog/#Fixed-21","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed subtypes in visualization (#384)","category":"page"},{"location":"changelog/#v0.3.11-(March-22,-2021)","page":"Release notes","title":"v0.3.11 (March 22, 2021)","text":"","category":"section"},{"location":"changelog/#Fixed-22","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed constructor in direct mode (#383)","category":"page"},{"location":"changelog/#Other-33","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fix documentation (#379)","category":"page"},{"location":"changelog/#v0.3.10-(February-23,-2021)","page":"Release notes","title":"v0.3.10 (February 23, 2021)","text":"","category":"section"},{"location":"changelog/#Fixed-23","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed seriescolor in publication plot (#376)","category":"page"},{"location":"changelog/#v0.3.9-(February-20,-2021)","page":"Release notes","title":"v0.3.9 (February 20, 2021)","text":"","category":"section"},{"location":"changelog/#Added-20","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Add option to simulate with different incoming state (#372)\nAdded warning for cuts with high dynamic range (#373)","category":"page"},{"location":"changelog/#Fixed-24","page":"Release 
notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed seriesalpha in publication plot (#375)","category":"page"},{"location":"changelog/#v0.3.8-(January-19,-2021)","page":"Release notes","title":"v0.3.8 (January 19, 2021)","text":"","category":"section"},{"location":"changelog/#Other-34","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#367) (#369) (#370)","category":"page"},{"location":"changelog/#v0.3.7-(January-8,-2021)","page":"Release notes","title":"v0.3.7 (January 8, 2021)","text":"","category":"section"},{"location":"changelog/#Other-35","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#362) (#363) (#365) (#366)\nBump copyright (#364)","category":"page"},{"location":"changelog/#v0.3.6-(December-17,-2020)","page":"Release notes","title":"v0.3.6 (December 17, 2020)","text":"","category":"section"},{"location":"changelog/#Other-36","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fix typos (#358)\nCollapse navigation bar in docs (#359)\nUpdate TagBot.yml (#361)","category":"page"},{"location":"changelog/#v0.3.5-(November-18,-2020)","page":"Release notes","title":"v0.3.5 (November 18, 2020)","text":"","category":"section"},{"location":"changelog/#Other-37","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Update citations (#348)\nSwitch to GitHub actions (#355)","category":"page"},{"location":"changelog/#v0.3.4-(August-25,-2020)","page":"Release notes","title":"v0.3.4 (August 25, 2020)","text":"","category":"section"},{"location":"changelog/#Added-21","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added non-uniform distributionally robust risk measure (#328)\nAdded numerical recovery functions (#330)\nAdded experimental StochOptFormat (#332) (#336) (#337) (#341) (#343) (#344)\nAdded entropic risk measure (#347)","category":"page"},{"location":"changelog/#Other-38","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#327) (#333) (#339) (#340)","category":"page"},{"location":"changelog/#v0.3.3-(June-19,-2020)","page":"Release notes","title":"v0.3.3 (June 19, 2020)","text":"","category":"section"},{"location":"changelog/#Added-22","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added asynchronous support for price and belief states (#325)\nAdded ForwardPass plug-in system (#320)","category":"page"},{"location":"changelog/#Fixed-25","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fix check for probabilities in Markovian graph (#322)","category":"page"},{"location":"changelog/#v0.3.2-(April-6,-2020)","page":"Release notes","title":"v0.3.2 (April 6, 
2020)","text":"","category":"section"},{"location":"changelog/#Added-23","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added log_frequency argument to SDDP.train (#307)","category":"page"},{"location":"changelog/#Other-39","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Improve error message in deterministic equivalent (#312)\nUpdate to RecipesBase 1.0 (#313)","category":"page"},{"location":"changelog/#v0.3.1-(February-26,-2020)","page":"Release notes","title":"v0.3.1 (February 26, 2020)","text":"","category":"section"},{"location":"changelog/#Fixed-26","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed filename in integrality_handlers.jl (#304)","category":"page"},{"location":"changelog/#v0.3.0-(February-20,-2020)","page":"Release notes","title":"v0.3.0 (February 20, 2020)","text":"","category":"section"},{"location":"changelog/#Breaking-2","page":"Release notes","title":"Breaking","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Breaking changes to update to JuMP v0.21 (#300).","category":"page"},{"location":"changelog/#v0.2.4-(February-7,-2020)","page":"Release notes","title":"v0.2.4 (February 7, 2020)","text":"","category":"section"},{"location":"changelog/#Added-24","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added a counter for the number of total subproblem solves (#301)","category":"page"},{"location":"changelog/#Other-40","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Update formatter (#298)\nAdded tests (#299)","category":"page"},{"location":"changelog/#v0.2.3-(January-24,-2020)","page":"Release notes","title":"v0.2.3 (January 24, 2020)","text":"","category":"section"},{"location":"changelog/#Added-25","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added support for convex risk measures (#294)","category":"page"},{"location":"changelog/#Fixed-27","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed bug when subproblem is infeasible (#296)\nFixed bug in deterministic equivalent (#297)","category":"page"},{"location":"changelog/#Other-41","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added example from IJOC paper (#293)","category":"page"},{"location":"changelog/#v0.2.2-(January-10,-2020)","page":"Release notes","title":"v0.2.2 (January 10, 2020)","text":"","category":"section"},{"location":"changelog/#Fixed-28","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Fixed flakey time limit in tests (#291)","category":"page"},{"location":"changelog/#Other-42","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release 
notes","text":"Removed MathOptFormat.jl (#289)\nUpdate copyright (#290)","category":"page"},{"location":"changelog/#v0.2.1-(December-19,-2019)","page":"Release notes","title":"v0.2.1 (December 19, 2019)","text":"","category":"section"},{"location":"changelog/#Added-26","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added support for approximating a Markov lattice (#282) (#285)\nAdd tools for visualizing the value function (#272) (#286)\nWrite .mof.json files on error (#284)","category":"page"},{"location":"changelog/#Other-43","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Improve documentation (#281) (#283)\nUpdate tests for Julia 1.3 (#287)","category":"page"},{"location":"changelog/#v0.2.0-(December-16,-2019)","page":"Release notes","title":"v0.2.0 (December 16, 2019)","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"This version added the asynchronous parallel implementation with a few minor breaking changes in how we iterated internally. It didn't break basic user-facing models, only implementations that implemented some of the extension features. It probably could have been a v1.1 release.","category":"page"},{"location":"changelog/#Added-27","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Added asynchronous parallel implementation (#277)\nAdded roll-out algorithm for cyclic graphs (#279)","category":"page"},{"location":"changelog/#Other-44","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Improved error messages in PolicyGraph (#271)\nAdded JuliaFormatter (#273) (#276)\nFixed compat bounds (#274) (#278)\nAdded documentation for simulating non-standard graphs (#280)","category":"page"},{"location":"changelog/#v0.1.0-(October-17,-2019)","page":"Release notes","title":"v0.1.0 (October 17, 2019)","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"A complete rewrite of SDDP.jl based on the policy graph framework. This was essentially a new package. It has minimal code in common with the previous implementation.","category":"page"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Development started on September 28, 2018 in Kokako.jl, and the code was merged into SDDP.jl on March 14, 2019.","category":"page"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"The pull request SDDP.jl#180 lists the 29 issues that the rewrite closed.","category":"page"},{"location":"changelog/#v0.0.1-(April-18,-2018)","page":"Release notes","title":"v0.0.1 (April 18, 2018)","text":"","category":"section"},{"location":"changelog/","page":"Release notes","title":"Release notes","text":"Initial release. Development had been underway since January 22, 2016 in the StochDualDynamicProgram.jl repository. The last development commit there was April 5, 2017. 
Work then continued in this repository for a year before the first tagged release.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"EditURL = \"example_newsvendor.jl\"","category":"page"},{"location":"tutorial/example_newsvendor/#Example:-two-stage-newsvendor","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"The purpose of this tutorial is to demonstrate how to model and solve a two-stage stochastic program.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"It is based on the Two stage stochastic programs tutorial in JuMP.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"This tutorial uses the following packages","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"using JuMP\nusing SDDP\nimport Distributions\nimport ForwardDiff\nimport HiGHS\nimport Plots\nimport StatsPlots\nimport Statistics","category":"page"},{"location":"tutorial/example_newsvendor/#Background","page":"Example: two-stage newsvendor","title":"Background","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"The data for this problem is:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"D = Distributions.TriangularDist(150.0, 250.0, 200.0)\nN = 100\nd = sort!(rand(D, N));\nΩ = 1:N\nP = fill(1 / N, N);\nStatsPlots.histogram(d; bins = 20, label = \"\", xlabel = \"Demand\")","category":"page"},{"location":"tutorial/example_newsvendor/#Kelley's-cutting-plane-algorithm","page":"Example: two-stage newsvendor","title":"Kelley's cutting plane algorithm","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Kelley's cutting plane algorithm is an iterative method for maximizing concave functions. 
Given a concave function f(x), Kelley's constructs an outer-approximation of the function using a set of first-order Taylor series approximations (called cuts) constructed at a set of points k = 1ldotsK:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"beginaligned\nf^K = maxlimits_theta in mathbbR x in mathbbR^N theta\n theta le f(x_k) + nabla f(x_k)^top (x - x_k)quad k=1ldotsK\n theta le M\nendaligned","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"where M is a sufficiently large number that is an upper bound for f over the domain of x.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Kelley's cutting plane algorithm is a structured way of choosing points x_k to visit, so that as more cuts are added:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"lim_K rightarrow infty f^K = maxlimits_x in mathbbR^N f(x)","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"However, before we introduce the algorithm, we need to introduce some bounds.","category":"page"},{"location":"tutorial/example_newsvendor/#Bounds","page":"Example: two-stage newsvendor","title":"Bounds","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"By concavity, f(x) le f^K for all x. Thus, if x^* is a maximizer of f, then at any point in time we can construct an upper bound for f(x^*) by solving f^K.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Moreover, we can use the primal solutions x_k^* returned by solving f^k to evaluate f(x_k^*) to generate a lower bound.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Therefore, maxlimits_k=1ldotsK f(x_k^*) le f(x^*) le f^K.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"When the lower bound is sufficiently close to the upper bound, we can terminate the algorithm and declare that we have found a solution that is close to optimal.","category":"page"},{"location":"tutorial/example_newsvendor/#Implementation","page":"Example: two-stage newsvendor","title":"Implementation","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Here is pseudo-code for the Kelley algorithm:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Take as input a concave function f(x) and an iteration limit K_max. Set K = 1, and initialize f^K-1. 
Set lb = -infty and ub = infty.\nSolve f^K-1 to obtain a candidate solution x_K.\nUpdate ub = f^K-1 and lb = maxlb f(x_K).\nAdd a cut theta ge f(x_K) + nabla fleft(x_Kright)^top (x - x_K) to form f^K.\nIncrement K.\nIf K K_max or ub - lb epsilon, STOP, otherwise, go to step 2.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"And here's a complete implementation:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"function kelleys_cutting_plane(\n # The function to be minimized.\n f::Function,\n # The gradient of `f`. By default, we use automatic differentiation to\n # compute the gradient of f so the user doesn't have to!\n ∇f::Function = x -> ForwardDiff.gradient(f, x);\n # The number of arguments to `f`.\n input_dimension::Int,\n # An upper bound for the function `f` over its domain.\n upper_bound::Float64,\n # The number of iterations to run Kelley's algorithm for before stopping.\n iteration_limit::Int,\n # The absolute tolerance ϵ to use for convergence.\n tolerance::Float64 = 1e-6,\n)\n # Step (1):\n K = 1\n model = JuMP.Model(HiGHS.Optimizer)\n JuMP.set_silent(model)\n JuMP.@variable(model, θ <= upper_bound)\n JuMP.@variable(model, x[1:input_dimension])\n JuMP.@objective(model, Max, θ)\n x_k = fill(NaN, input_dimension)\n lower_bound, upper_bound = -Inf, Inf\n while true\n # Step (2):\n JuMP.optimize!(model)\n x_k .= JuMP.value.(x)\n # Step (3):\n upper_bound = JuMP.objective_value(model)\n lower_bound = min(upper_bound, f(x_k))\n println(\"K = $K : $(lower_bound) <= f(x*) <= $(upper_bound)\")\n # Step (4):\n JuMP.@constraint(model, θ <= f(x_k) + ∇f(x_k)' * (x .- x_k))\n # Step (5):\n K = K + 1\n # Step (6):\n if K > iteration_limit\n println(\"-- Termination status: iteration limit --\")\n break\n elseif abs(upper_bound - lower_bound) < tolerance\n println(\"-- Termination status: converged --\")\n break\n end\n end\n println(\"Found solution: x_K = \", x_k)\n return\nend","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Let's run our algorithm to see what happens:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"kelleys_cutting_plane(;\n input_dimension = 2,\n upper_bound = 10.0,\n iteration_limit = 20,\n) do x\n return -(x[1] - 1)^2 + -(x[2] + 2)^2 + 1.0\nend","category":"page"},{"location":"tutorial/example_newsvendor/#L-Shaped-theory","page":"Example: two-stage newsvendor","title":"L-Shaped theory","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"The L-Shaped method is a way of solving two-stage stochastic programs by Benders' decomposition. 
It takes the problem:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"beginaligned\nV = maxlimits_xy_omega -2x + mathbbE_omega5y_omega - 01(x - y_omega) \n y_omega le x quad forall omega in Omega \n 0 le y_omega le d_omega quad forall omega in Omega \n x ge 0\nendaligned","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"and decomposes it into a second-stage problem:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"beginaligned\nV_2(barx d_omega) = maxlimits_xx^primey_omega 5y_omega - x^prime \n y_omega le x \n x^prime = x - y_omega \n 0 le y_omega le d_omega \n x = barx lambda\nendaligned","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"and a first-stage problem:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"beginaligned\nV = maxlimits_xtheta -2x + theta \n theta le mathbbE_omegaV_2(x omega) \n x ge 0\nendaligned","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Then, because V_2 is concave with respect to barx for fixed omega, we can use a set of feasible points x^k to construct an outer approximation:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"beginaligned\nV^K = maxlimits_xtheta -2x + theta \n theta le mathbbE_omegaV_2(x^k omega) + nabla V_2(x^k omega)^top(x - x^k) quad k = 1ldotsK\n x ge 0 \n theta le M\nendaligned","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"where M is an upper bound on possible values of V_2 so that the problem has a bounded solution.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"It is also useful to see that because barx appears only on the right-hand side of a linear program, nabla V_2(x^k omega) = lambda^k.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Ignoring how we choose x^k for now, we can construct a lower and upper bound on the optimal solution:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"-2x^K + mathbbE_omegaV_2(x^K omega) = underbarV le V le overlineV = V^K","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Thus, we need some way of cleverly choosing a sequence of x^k so that the lower bound converges to the upper bound.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Start with K=1\nSolve V^K-1 to get x^K\nSet overlineV = V^k\nSolve V_2(x^K omega) for all omega and store the optimal objective value and dual solution lambda^K\nSet underbarV = -2x^K + 
mathbbE_omegaV_2(x^k omega)\nIf underbarV approx overlineV, STOP\nAdd new constraint theta le mathbbE_omegaV_2(x^K omega) +lambda^K (x - x^K)\nIncrement K, GOTO 2","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"The next section implements this algorithm in Julia.","category":"page"},{"location":"tutorial/example_newsvendor/#L-Shaped-implementation","page":"Example: two-stage newsvendor","title":"L-Shaped implementation","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Here's a function to compute the second-stage problem;","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"function solve_second_stage(x̅, d_ω)\n model = Model(HiGHS.Optimizer)\n set_silent(model)\n @variable(model, x_in)\n @variable(model, x_out >= 0)\n fix(x_in, x̅)\n @variable(model, 0 <= u_sell <= d_ω)\n @constraint(model, x_out == x_in - u_sell)\n @constraint(model, u_sell <= x_in)\n @objective(model, Max, 5 * u_sell - 0.1 * x_out)\n optimize!(model)\n return (\n V = objective_value(model),\n λ = reduced_cost(x_in),\n x = value(x_out),\n u = value(u_sell),\n )\nend\n\nsolve_second_stage(200, 170)","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Here's the first-stage subproblem:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"model = Model(HiGHS.Optimizer)\nset_silent(model)\n@variable(model, x_in == 0)\n@variable(model, x_out >= 0)\n@variable(model, u_make >= 0)\n@constraint(model, x_out == x_in + u_make)\nM = 5 * maximum(d)\n@variable(model, θ <= M)\n@objective(model, Max, -2 * u_make + θ)","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Importantly, to ensure we have a bounded solution, we need to add an upper bound to the variable θ.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"kIterationLimit = 100\nfor k in 1:kIterationLimit\n println(\"Solving iteration k = $k\")\n # Step 2\n optimize!(model)\n xᵏ = value(x_out)\n println(\" xᵏ = $xᵏ\")\n # Step 3\n ub = objective_value(model)\n println(\" V̅ = $ub\")\n # Step 4\n ret = [solve_second_stage(xᵏ, d[ω]) for ω in Ω]\n # Step 5\n lb = value(-2 * u_make) + sum(p * r.V for (p, r) in zip(P, ret))\n println(\" V̲ = $lb\")\n # Step 6\n if ub - lb < 1e-6\n println(\"Terminating with near-optimal solution\")\n break\n end\n # Step 7\n c = @constraint(\n model,\n θ <= sum(p * (r.V + r.λ * (x_out - xᵏ)) for (p, r) in zip(P, ret)),\n )\n println(\" Added cut: $c\")\nend","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"To get the first-stage solution, we do:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"optimize!(model)\nxᵏ = value(x_out)","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage 
newsvendor","text":"To compute a second-stage solution, we do:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"solve_second_stage(xᵏ, 170.0)","category":"page"},{"location":"tutorial/example_newsvendor/#Policy-Graph","page":"Example: two-stage newsvendor","title":"Policy Graph","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Now let's see how we can formulate and train a policy for the two-stage newsvendor problem using SDDP.jl. Under the hood, SDDP.jl implements the exact algorithm that we just wrote by hand.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"model = SDDP.LinearPolicyGraph(;\n stages = 2,\n sense = :Max,\n upper_bound = 5 * maximum(d), # The `M` in θ <= M\n optimizer = HiGHS.Optimizer,\n) do subproblem::JuMP.Model, stage::Int\n @variable(subproblem, x >= 0, SDDP.State, initial_value = 0)\n if stage == 1\n @variable(subproblem, u_make >= 0)\n @constraint(subproblem, x.out == x.in + u_make)\n @stageobjective(subproblem, -2 * u_make)\n else\n @variable(subproblem, u_sell >= 0)\n @constraint(subproblem, u_sell <= x.in)\n @constraint(subproblem, x.out == x.in - u_sell)\n SDDP.parameterize(subproblem, d, P) do ω\n set_upper_bound(u_sell, ω)\n return\n end\n @stageobjective(subproblem, 5 * u_sell - 0.1 * x.out)\n end\n return\nend\n\nSDDP.train(model; log_every_iteration = true)","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"One way to query the optimal policy is with SDDP.DecisionRule:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"first_stage_rule = SDDP.DecisionRule(model; node = 1)","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"solution_1 = SDDP.evaluate(first_stage_rule; incoming_state = Dict(:x => 0.0))","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Here's the second stage:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"second_stage_rule = SDDP.DecisionRule(model; node = 2)\nsolution = SDDP.evaluate(\n second_stage_rule;\n incoming_state = Dict(:x => solution_1.outgoing_state[:x]),\n noise = 170.0, # A value of d[ω], can be out-of-sample.\n controls_to_record = [:u_sell],\n)","category":"page"},{"location":"tutorial/example_newsvendor/#Simulation","page":"Example: two-stage newsvendor","title":"Simulation","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Querying the decision rules is tedious. 
It's often more useful to simulate the policy:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"simulations = SDDP.simulate(\n model,\n 10, #= number of replications =#\n [:x, :u_sell, :u_make]; #= variables to record =#\n skip_undefined_variables = true,\n);\nnothing #hide","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"simulations is a vector with 10 elements","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"length(simulations)","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"and each element is a vector with two elements (one for each stage)","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"length(simulations[1])","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"The first stage contains:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"simulations[1][1]","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"The second stage contains:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"simulations[1][2]","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"We can compute aggregated statistics across the simulations:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"objectives = map(simulations) do simulation\n return sum(data[:stage_objective] for data in simulation)\nend\nμ, t = SDDP.confidence_interval(objectives)\nprintln(\"Simulation ci : $μ ± $t\")","category":"page"},{"location":"tutorial/example_newsvendor/#Risk-aversion-revisited","page":"Example: two-stage newsvendor","title":"Risk aversion revisited","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"SDDP.jl contains a number of risk measures. 
One example is:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"0.5 * SDDP.Expectation() + 0.5 * SDDP.WorstCase()","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"You can construct a risk-averse policy by passing a risk measure to the risk_measure keyword argument of SDDP.train.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"We can explore how the optimal decision changes with risk by creating a function:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"function solve_newsvendor(risk_measure::SDDP.AbstractRiskMeasure)\n model = SDDP.LinearPolicyGraph(;\n stages = 2,\n sense = :Max,\n upper_bound = 5 * maximum(d),\n optimizer = HiGHS.Optimizer,\n ) do subproblem, node\n @variable(subproblem, x >= 0, SDDP.State, initial_value = 0)\n if node == 1\n @stageobjective(subproblem, -2 * x.out)\n else\n @variable(subproblem, u_sell >= 0)\n @constraint(subproblem, u_sell <= x.in)\n @constraint(subproblem, x.out == x.in - u_sell)\n SDDP.parameterize(subproblem, d, P) do ω\n set_upper_bound(u_sell, ω)\n return\n end\n @stageobjective(subproblem, 5 * u_sell - 0.1 * x.out)\n end\n return\n end\n SDDP.train(model; risk_measure = risk_measure, print_level = 0)\n first_stage_rule = SDDP.DecisionRule(model; node = 1)\n solution = SDDP.evaluate(first_stage_rule; incoming_state = Dict(:x => 0.0))\n return solution.outgoing_state[:x]\nend","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Now we can see how many units a decision maker would order using CVaR:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"solve_newsvendor(SDDP.CVaR(0.4))","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"as well as a decision-maker who cares only about the worst-case outcome:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"solve_newsvendor(SDDP.WorstCase())","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"In general, the decision-maker will be somewhere between the two extremes. The SDDP.Entropic risk measure is a risk measure that has a single parameter that lets us explore the space of policies between the two extremes. 
When the parameter is small, the measure acts like SDDP.Expectation, and when it is large, it acts like SDDP.WorstCase.","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Here is what we get if we solve our problem multiple times for different values of the risk aversion parameter gamma:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Γ = [10^i for i in -4:0.5:1]\nbuy = [solve_newsvendor(SDDP.Entropic(γ)) for γ in Γ]\nPlots.plot(\n Γ,\n buy;\n xaxis = :log,\n xlabel = \"Risk aversion parameter γ\",\n ylabel = \"Number of pies to make\",\n legend = false,\n)","category":"page"},{"location":"tutorial/example_newsvendor/#Things-to-try","page":"Example: two-stage newsvendor","title":"Things to try","text":"","category":"section"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"There are a number of things you can try next:","category":"page"},{"location":"tutorial/example_newsvendor/","page":"Example: two-stage newsvendor","title":"Example: two-stage newsvendor","text":"Experiment with different buy and sales prices\nExperiment with different distributions of demand\nExplore how the optimal policy changes if you use a different risk measure\nWhat happens if you can only buy and sell integer numbers of newspapers? Try this by adding Int to the variable definitions: @variable(subproblem, buy >= 0, Int)\nWhat happens if you use a different upper bound? Try an invalid one like -100, and a very large one like 1e12.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"EditURL = \"theory_intro.jl\"","category":"page"},{"location":"explanation/theory_intro/#Introductory-theory","page":"Introductory theory","title":"Introductory theory","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"note: Note\nThis tutorial is aimed at advanced undergraduates or early-stage graduate students. You don't need prior exposure to stochastic programming! (Indeed, it may be better if you don't, because our approach is non-standard in the literature.)This tutorial is also a living document. If parts are unclear, please open an issue so it can be improved!","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"This tutorial will teach you how the stochastic dual dynamic programming algorithm works by implementing a simplified version of the algorithm.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Our implementation is very much a \"vanilla\" version of SDDP; it doesn't have (m)any fancy computational tricks (e.g., the ones included in SDDP.jl) that you need to code a performant or stable version that will work on realistic instances. 
However, our simplified implementation will work on arbitrary policy graphs, including those with cycles such as infinite horizon problems!","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Packages","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"This tutorial uses the following packages. For clarity, we call import PackageName so that we must prefix PackageName. to all functions and structs provided by that package. Everything not prefixed is either part of base Julia, or we wrote it.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"import ForwardDiff\nimport HiGHS\nimport JuMP\nimport Statistics","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"tip: Tip\nYou can follow along by installing the above packages, and copy-pasting the code we will write into a Julia REPL. Alternatively, you can download the Julia .jl file which created this tutorial from GitHub.","category":"page"},{"location":"explanation/theory_intro/#Preliminaries:-background-theory","page":"Introductory theory","title":"Preliminaries: background theory","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Start this tutorial by reading An introduction to SDDP.jl, which introduces the necessary notation and vocabulary that we need for this tutorial.","category":"page"},{"location":"explanation/theory_intro/#Preliminaries:-Kelley's-cutting-plane-algorithm","page":"Introductory theory","title":"Preliminaries: Kelley's cutting plane algorithm","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Kelley's cutting plane algorithm is an iterative method for minimizing convex functions. 
Given a convex function f(x), Kelley's algorithm constructs an under-approximation of the function at the minimum by a set of first-order Taylor series approximations (called cuts) constructed at a set of points k = 1ldotsK:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"beginaligned\nf^K = minlimits_theta in mathbbR x in mathbbR^N theta\n theta ge f(x_k) + fracddxf(x_k)^top (x - x_k)quad k=1ldotsK\n theta ge M\nendaligned","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"where M is a sufficiently large negative number that is a lower bound for f over the domain of x.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Kelley's cutting plane algorithm is a structured way of choosing points x_k to visit, so that as more cuts are added:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"lim_K rightarrow infty f^K = minlimits_x in mathbbR^N f(x)","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"However, before we introduce the algorithm, we need to introduce some bounds.","category":"page"},{"location":"explanation/theory_intro/#Bounds","page":"Introductory theory","title":"Bounds","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"By convexity, f^K le f(x) for all x. Thus, if x^* is a minimizer of f, then at any point in time we can construct a lower bound for f(x^*) by solving f^K.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Moreover, we can use the primal solutions x_k^* returned by solving f^k to evaluate f(x_k^*) to generate an upper bound.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Therefore, f^K le f(x^*) le minlimits_k=1ldotsK f(x_k^*).","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"When the lower bound is sufficiently close to the upper bound, we can terminate the algorithm and declare that we have found a solution that is close to optimal.","category":"page"},{"location":"explanation/theory_intro/#Implementation","page":"Introductory theory","title":"Implementation","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Here is pseudo-code for the Kelley algorithm:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Take as input a convex function f(x) and an iteration limit K_max. Set K = 0, and initialize f^K. 
Set lb = -infty and ub = infty.\nSolve f^K to obtain a candidate solution x_K+1.\nUpdate lb = f^K and ub = minub f(x_K+1).\nAdd a cut theta ge f(x_K+1) + fracddxfleft(x_K+1right)^top (x - x_K+1) to form f^K+1.\nIncrement K.\nIf K = K_max or ub - lb epsilon, STOP, otherwise, go to step 2.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"And here's a complete implementation:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function kelleys_cutting_plane(\n # The function to be minimized.\n f::Function,\n # The gradient of `f`. By default, we use automatic differentiation to\n # compute the gradient of f so the user doesn't have to!\n dfdx::Function = x -> ForwardDiff.gradient(f, x);\n # The number of arguments to `f`.\n input_dimension::Int,\n # A lower bound for the function `f` over its domain.\n lower_bound::Float64,\n # The number of iterations to run Kelley's algorithm for before stopping.\n iteration_limit::Int,\n # The absolute tolerance ϵ to use for convergence.\n tolerance::Float64 = 1e-6,\n)\n # Step (1):\n K = 0\n model = JuMP.Model(HiGHS.Optimizer)\n JuMP.set_silent(model)\n JuMP.@variable(model, θ >= lower_bound)\n JuMP.@variable(model, x[1:input_dimension])\n JuMP.@objective(model, Min, θ)\n x_k = fill(NaN, input_dimension)\n lower_bound, upper_bound = -Inf, Inf\n while true\n # Step (2):\n JuMP.optimize!(model)\n x_k .= JuMP.value.(x)\n # Step (3):\n lower_bound = JuMP.objective_value(model)\n upper_bound = min(upper_bound, f(x_k))\n println(\"K = $K : $(lower_bound) <= f(x*) <= $(upper_bound)\")\n # Step (4):\n JuMP.@constraint(model, θ >= f(x_k) + dfdx(x_k)' * (x .- x_k))\n # Step (5):\n K = K + 1\n # Step (6):\n if K == iteration_limit\n println(\"-- Termination status: iteration limit --\")\n break\n elseif abs(upper_bound - lower_bound) < tolerance\n println(\"-- Termination status: converged --\")\n break\n end\n end\n println(\"Found solution: x_K = \", x_k)\n return\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Let's run our algorithm to see what happens:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"kelleys_cutting_plane(;\n input_dimension = 2,\n lower_bound = 0.0,\n iteration_limit = 20,\n) do x\n return (x[1] - 1)^2 + (x[2] + 2)^2 + 1.0\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"warning: Warning\nIt's hard to choose a valid lower bound! If you choose one too loose, the algorithm can take a long time to converge. However, if you choose one so tight that M f(x^*), then you can obtain a suboptimal solution. 
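For instance, the function we just minimized attains f(x^*) = 1 at x^* = (1, -2), so the lower_bound = 0.0 that we passed is valid. As a sketch of what goes wrong otherwise, re-running the same call with a too-tight bound:

kelleys_cutting_plane(;
    input_dimension = 2,
    lower_bound = 2.0,  # invalid: 2.0 > f(x^*) = 1.0
    iteration_limit = 20,
) do x
    return (x[1] - 1)^2 + (x[2] + 2)^2 + 1.0
end

keeps θ, and therefore the reported lower bound, at or above 2.0, which exceeds the true minimum. 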
For a deeper discussion of the implications for SDDP.jl, see Choosing an initial bound.","category":"page"},{"location":"explanation/theory_intro/#Preliminaries:-approximating-the-cost-to-go-term","page":"Introductory theory","title":"Preliminaries: approximating the cost-to-go term","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"In the background theory section, we discussed how you could formulate an optimal policy to a multistage stochastic program using the dynamic programming recursion:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"beginaligned\nV_i(x omega) = minlimits_barx x^prime u C_i(barx u omega) + mathbbE_j in i^+ varphi in Omega_jV_j(x^prime varphi)\n x^prime = T_i(barx u omega) \n u in U_i(barx omega) \n barx = x\nendaligned","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"where our decision rule, pi_i(x omega), solves this optimization problem and returns a u^* corresponding to an optimal solution. Moreover, we alluded to the fact that the cost-to-go term (the nasty recursive expectation) makes this problem intractable to solve.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"However, if, excluding the cost-to-go term (i.e., the SP formulation), V_i(x omega) can be formulated as a linear program (this also works for convex programs, but the math is more involved), then we can make some progress by noticing that x only appears as a right-hand side term of the fishing constraint barx = x. Therefore, V_i(x cdot) is convex with respect to x for fixed omega. (If you have not seen this result before, try to prove it.)","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"The fishing constraint barx = x has an associated dual variable. The economic interpretation of this dual variable is that it represents the change in the objective function if the right-hand side x is increased on the scale of one unit. In other words, and with a slight abuse of notation, it is the value fracddx V_i(x omega). 
(Because V_i is not differentiable, it is a subgradient instead of a derivative.)","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"If we implement the constraint barx = x by setting the lower- and upper bounds of barx to x, then the reduced cost of the decision variable barx is the subgradient, and we do not need to explicitly add the fishing constraint as a row to the constraint matrix.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"tip: Tip\nThe subproblem can have binary and integer variables, but you'll need to use Lagrangian duality to compute a subgradient!","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Stochastic dual dynamic programming converts this problem into a tractable form by applying Kelley's cutting plane algorithm to the V_j functions in the cost-to-go term:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"beginaligned\nV_i^K(x omega) = minlimits_barx x^prime u C_i(barx u omega) + theta\n x^prime = T_i(barx u omega) \n u in U_i(barx omega) \n barx = x \n theta ge mathbbE_j in i^+ varphi in Omega_jleftV_j^k(x^prime_k varphi) + fracddx^primeV_j^k(x^prime_k varphi)^top (x^prime - x^prime_k)rightquad k=1ldotsK \n theta ge M\nendaligned","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"All we need now is a way of generating these cutting planes in an iterative manner. Before we get to that though, let's start writing some code.","category":"page"},{"location":"explanation/theory_intro/#Implementation:-modeling","page":"Introductory theory","title":"Implementation: modeling","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Let's make a start by defining the problem structure. Like SDDP.jl, we need a few things:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"A description of the structure of the policy graph: how many nodes there are, and the arcs linking the nodes together with their corresponding probabilities.\nA JuMP model for each node in the policy graph.\nA way to identify the incoming and outgoing state variables of each node.\nA description of the random variable, as well as a function that we can call that will modify the JuMP model to reflect the realization of the random variable.\nA decision variable to act as the approximated cost-to-go term.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"warning: Warning\nIn the interests of brevity, there is minimal error checking. 
Think about all the different ways you could break the code!","category":"page"},{"location":"explanation/theory_intro/#Structs","page":"Introductory theory","title":"Structs","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"The first struct we are going to use is a State struct that will wrap an incoming and outgoing state variable:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"struct State\n in::JuMP.VariableRef\n out::JuMP.VariableRef\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Next, we need a struct to wrap all of the uncertainty within a node:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"struct Uncertainty\n parameterize::Function\n Ω::Vector{Any}\n P::Vector{Float64}\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"parameterize is a function which takes a realization of the random variable omegainOmega and updates the subproblem accordingly. The finite discrete random variable is defined by the vectors Ω and P, so that the random variable takes the value Ω[i] with probability P[i]. As such, P should sum to 1. (We don't check this here, but we should; we do in SDDP.jl.)","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Now we have two building blocks, we can declare the structure of each node:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"struct Node\n subproblem::JuMP.Model\n states::Dict{Symbol,State}\n uncertainty::Uncertainty\n cost_to_go::JuMP.VariableRef\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"subproblem is going to be the JuMP model that we build at each node.\nstates is a dictionary that maps a symbolic name of a state variable to a State object wrapping the incoming and outgoing state variables in subproblem.\nuncertainty is an Uncertainty object described above.\ncost_to_go is a JuMP variable that approximates the cost-to-go term.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Finally, we define a simplified policy graph as follows:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"struct PolicyGraph\n nodes::Vector{Node}\n arcs::Vector{Dict{Int,Float64}}\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"There is a vector of nodes, as well as a data structure for the arcs. arcs is a vector of dictionaries, where arcs[i][j] gives the probability of transitioning from node i to node j, if an arc exists.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"To simplify things, we will assume that the root node transitions to node 1 with probability 1, and there are no other incoming arcs to node 1. 
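For example, under this convention the linear three-stage graph that we build below is encoded as:

graph = [Dict(2 => 1.0), Dict(3 => 1.0), Dict{Int,Float64}()]

where node 1 transitions to node 2 with probability 1, node 2 transitions to node 3 with probability 1, and the empty dictionary means that node 3 has no children. 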
Notably, we can still define cyclic graphs though!","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"We also define a nice show method so that we don't accidentally print a large amount of information to the screen when creating a model:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function Base.show(io::IO, model::PolicyGraph)\n println(io, \"A policy graph with $(length(model.nodes)) nodes\")\n println(io, \"Arcs:\")\n for (from, arcs) in enumerate(model.arcs)\n for (to, probability) in arcs\n println(io, \" $(from) => $(to) w.p. $(probability)\")\n end\n end\n return\nend","category":"page"},{"location":"explanation/theory_intro/#Functions","page":"Introductory theory","title":"Functions","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Now we have some basic types, let's implement some functions so that the user can create a model.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"First, we need an example of a function that the user will provide. Like SDDP.jl, this takes an empty subproblem, and a node index, in this case t::Int. You could change this function to change the model, or define a new one later in the code.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"We're going to copy the example from An introduction to SDDP.jl, with some minor adjustments for the fact we don't have many of the bells and whistles of SDDP.jl. You can probably see how some of the SDDP.jl functionality like @stageobjective and SDDP.parameterize help smooth some of the usability issues like needing to construct both the incoming and outgoing state variables, or needing to explicitly declare return states, uncertainty.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function subproblem_builder(subproblem::JuMP.Model, t::Int)\n # Define the state variables. Note how we fix the incoming state to the\n # initial state variable regardless of `t`! This isn't strictly necessary;\n # it only matters that we do it for the first node.\n JuMP.@variable(subproblem, volume_in == 200)\n JuMP.@variable(subproblem, 0 <= volume_out <= 200)\n states = Dict(:volume => State(volume_in, volume_out))\n # Define the control variables.\n JuMP.@variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n inflow\n end)\n # Define the constraints\n JuMP.@constraints(\n subproblem,\n begin\n volume_out == volume_in + inflow - hydro_generation - hydro_spill\n demand_constraint, thermal_generation + hydro_generation == 150.0\n end\n )\n # Define the objective for each stage `t`. Note that we can use `t` as an\n # index for t = 1, 2, 3.\n fuel_cost = [50.0, 100.0, 150.0]\n JuMP.@objective(subproblem, Min, fuel_cost[t] * thermal_generation)\n # Finally, we define the uncertainty object. Because this is a simplified\n # implementation of SDDP, we shall politely ask the user to only modify the\n # constraints, and not the objective function! 
(Not that it changes the\n # algorithm, we just have to add more information to keep track of things.)\n uncertainty = Uncertainty([0.0, 50.0, 100.0], [1 / 3, 1 / 3, 1 / 3]) do ω\n return JuMP.fix(inflow, ω)\n end\n return states, uncertainty\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"The next function we need to define is the analog of SDDP.PolicyGraph. It should be pretty readable.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function PolicyGraph(\n subproblem_builder::Function;\n graph::Vector{Dict{Int,Float64}},\n lower_bound::Float64,\n optimizer,\n)\n nodes = Node[]\n for t in 1:length(graph)\n # Create a model.\n model = JuMP.Model(optimizer)\n JuMP.set_silent(model)\n # Use the provided function to build out each subproblem. The user's\n # function returns a dictionary mapping `Symbol`s to `State` objects,\n # and an `Uncertainty` object.\n states, uncertainty = subproblem_builder(model, t)\n # Now add the cost-to-go terms:\n JuMP.@variable(model, cost_to_go >= lower_bound)\n obj = JuMP.objective_function(model)\n JuMP.@objective(model, Min, obj + cost_to_go)\n # If there are no outgoing arcs, the cost-to-go is 0.0.\n if length(graph[t]) == 0\n JuMP.fix(cost_to_go, 0.0; force = true)\n end\n push!(nodes, Node(model, states, uncertainty, cost_to_go))\n end\n return PolicyGraph(nodes, graph)\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Then, we can create a model using the subproblem_builder function we defined earlier:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"model = PolicyGraph(\n subproblem_builder;\n graph = [Dict(2 => 1.0), Dict(3 => 1.0), Dict{Int,Float64}()],\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n)","category":"page"},{"location":"explanation/theory_intro/#Implementation:-helpful-samplers","page":"Introductory theory","title":"Implementation: helpful samplers","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Before we get properly coding the solution algorithm, it's also going to be useful to have a function that samples a realization of the random variable defined by Ω and P.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function sample_uncertainty(uncertainty::Uncertainty)\n r = rand()\n for (p, ω) in zip(uncertainty.P, uncertainty.Ω)\n r -= p\n if r < 0.0\n return ω\n end\n end\n return error(\"We should never get here because P should sum to 1.0.\")\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"note: Note\nrand() samples a uniform random variable in [0, 1).","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"For example:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"for i in 1:3\n println(\"ω = \", sample_uncertainty(model.nodes[1].uncertainty))\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"It's also going to be useful to define a 
function that generates a random walk through the nodes of the graph:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function sample_next_node(model::PolicyGraph, current::Int)\n if length(model.arcs[current]) == 0\n # No outgoing arcs!\n return nothing\n else\n r = rand()\n for (to, probability) in model.arcs[current]\n r -= probability\n if r < 0.0\n return to\n end\n end\n # We looped through the outgoing arcs and still have probability left\n # over! This means we've hit an implicit \"zero\" node.\n return nothing\n end\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"For example:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"for i in 1:3\n # We use `repr` to print the next node, because `sample_next_node` can\n # return `nothing`.\n println(\"Next node from $(i) = \", repr(sample_next_node(model, i)))\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"This is a little boring, because our graph is simple. However, more complicated graphs will generate more interesting trajectories!","category":"page"},{"location":"explanation/theory_intro/#Implementation:-the-forward-pass","page":"Introductory theory","title":"Implementation: the forward pass","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Recall that, after approximating the cost-to-go term, we need a way of generating the cuts. As the first step, we need a way of generating candidate solutions x_k^prime. However, unlike the Kelley's example, our functions V_j^k(x^prime varphi) need two inputs: an outgoing state variable and a realization of the random variable.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"One way of getting these inputs is just to pick a random (feasible) value. However, in doing so, we might pick outgoing state variables that we will never see in practice, or we might infrequently pick outgoing state variables that we will often see in practice. Therefore, a better way of generating the inputs is to use a simulation of the policy, which we call the forward pass.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"The forward pass walks the policy graph from start to end, transitioning randomly along the arcs. At each node, it observes a realization of the random variable and solves the approximated subproblem to generate a candidate outgoing state variable x_k^prime. 
The outgoing state variable is passed as the incoming state variable to the next node in the trajectory.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function forward_pass(model::PolicyGraph, io::IO = stdout)\n println(io, \"| Forward Pass\")\n # First, get the value of the state at the root node (e.g., x_R).\n incoming_state =\n Dict(k => JuMP.fix_value(v.in) for (k, v) in model.nodes[1].states)\n # `simulation_cost` is an accumlator that is going to sum the stage-costs\n # incurred over the forward pass.\n simulation_cost = 0.0\n # We also need to record the nodes visited and resultant outgoing state\n # variables so we can pass them to the backward pass.\n trajectory = Tuple{Int,Dict{Symbol,Float64}}[]\n # Now's the meat of the forward pass: beginning at the first node:\n t = 1\n while t !== nothing\n node = model.nodes[t]\n println(io, \"| | Visiting node $(t)\")\n # Sample the uncertainty:\n ω = sample_uncertainty(node.uncertainty)\n println(io, \"| | | ω = \", ω)\n # Parameterizing the subproblem using the user-provided function:\n node.uncertainty.parameterize(ω)\n println(io, \"| | | x = \", incoming_state)\n # Update the incoming state variable:\n for (k, v) in incoming_state\n JuMP.fix(node.states[k].in, v; force = true)\n end\n # Now solve the subproblem and check we found an optimal solution:\n JuMP.optimize!(node.subproblem)\n if JuMP.termination_status(node.subproblem) != JuMP.MOI.OPTIMAL\n error(\"Something went terribly wrong!\")\n end\n # Compute the outgoing state variables:\n outgoing_state = Dict(k => JuMP.value(v.out) for (k, v) in node.states)\n println(io, \"| | | x′ = \", outgoing_state)\n # We also need to compute the stage cost to add to our\n # `simulation_cost` accumulator:\n stage_cost =\n JuMP.objective_value(node.subproblem) - JuMP.value(node.cost_to_go)\n simulation_cost += stage_cost\n println(io, \"| | | C(x, u, ω) = \", stage_cost)\n # As a penultimate step, set the outgoing state of stage t and the\n # incoming state of stage t + 1, and add the node to the trajectory.\n incoming_state = outgoing_state\n push!(trajectory, (t, outgoing_state))\n # Finally, sample a new node to step to. If `t === nothing`, the\n # `while` loop will break.\n t = sample_next_node(model, t)\n end\n return trajectory, simulation_cost\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Let's take a look at one forward pass:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"trajectory, simulation_cost = forward_pass(model);\nnothing #hide","category":"page"},{"location":"explanation/theory_intro/#Implementation:-the-backward-pass","page":"Introductory theory","title":"Implementation: the backward pass","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"From the forward pass, we obtained a vector of nodes visited and their corresponding outgoing state variables. Now we need to refine the approximation for each node at the candidate solution for the outgoing state variable. 
That is, we need to add a new cut:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"theta ge mathbbE_j in i^+ varphi in Omega_jleftV_j^k(x^prime_k varphi) + fracddx^primeV_j^k(x^prime_k varphi)^top (x^prime - x^prime_k)right","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"or alternatively:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"theta ge sumlimits_j in i^+ sumlimits_varphi in Omega_j p_ij p_varphileftV_j^k(x^prime_k varphi) + fracddx^primeV_j^k(x^prime_k varphi)^top (x^prime - x^prime_k)right","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"It doesn't matter what order we visit the nodes to generate these cuts for. For example, we could compute them all in parallel, using the current approximations of V^K_i.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"However, we can be smarter than that.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"If we traverse the list of nodes visited in the forward pass in reverse, then we come to refine the i^th node in the trajectory, we will already have improved the approximation of the (i+1)^th node in the trajectory as well! Therefore, our refinement of the i^th node will be better than if we improved node i first, and then refined node (i+1).","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Because we walk the nodes in reverse, we call this the backward pass.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"info: Info\nIf you're into deep learning, you could view this as the equivalent of back-propagation: the forward pass pushes primal information through the graph (outgoing state variables), and the backward pass pulls dual information (cuts) back through the graph to improve our decisions on the next forward pass.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function backward_pass(\n model::PolicyGraph,\n trajectory::Vector{Tuple{Int,Dict{Symbol,Float64}}},\n io::IO = stdout,\n)\n println(io, \"| Backward pass\")\n # For the backward pass, we walk back up the nodes.\n for i in reverse(1:length(trajectory))\n index, outgoing_states = trajectory[i]\n node = model.nodes[index]\n println(io, \"| | Visiting node $(index)\")\n if length(model.arcs[index]) == 0\n # If there are no children, the cost-to-go is 0.\n println(io, \"| | | Skipping node because the cost-to-go is 0\")\n continue\n end\n # Create an empty affine expression that we will use to build up the\n # right-hand side of the cut expression.\n cut_expression = JuMP.AffExpr(0.0)\n # For each node j ∈ i⁺\n for (j, P_ij) in model.arcs[index]\n next_node = model.nodes[j]\n # Set the incoming state variables of node j to the outgoing state\n # variables of node i\n for (k, v) in outgoing_states\n JuMP.fix(next_node.states[k].in, v; force = true)\n end\n # Then for each realization of φ ∈ Ωⱼ\n for (pφ, φ) in zip(next_node.uncertainty.P, next_node.uncertainty.Ω)\n # Setup and solve for the 
realization of φ\n println(io, \"| | | Solving φ = \", φ)\n next_node.uncertainty.parameterize(φ)\n JuMP.optimize!(next_node.subproblem)\n # Then prepare the cut `P_ij * pφ * [V + dVdxᵀ(x - x_k)]``\n V = JuMP.objective_value(next_node.subproblem)\n println(io, \"| | | | V = \", V)\n dVdx = Dict(\n k => JuMP.reduced_cost(v.in) for (k, v) in next_node.states\n )\n println(io, \"| | | | dVdx′ = \", dVdx)\n cut_expression += JuMP.@expression(\n node.subproblem,\n P_ij *\n pφ *\n (\n V + sum(\n dVdx[k] * (x.out - outgoing_states[k]) for\n (k, x) in node.states\n )\n ),\n )\n end\n end\n # And then refine the cost-to-go variable by adding the cut:\n c = JuMP.@constraint(node.subproblem, node.cost_to_go >= cut_expression)\n println(io, \"| | | Adding cut : \", c)\n end\n return nothing\nend","category":"page"},{"location":"explanation/theory_intro/#Implementation:-bounds","page":"Introductory theory","title":"Implementation: bounds","text":"","category":"section"},{"location":"explanation/theory_intro/#Lower-bounds","page":"Introductory theory","title":"Lower bounds","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Recall from Kelley's that we can obtain a lower bound for f(x^*) be evaluating f^K. The analogous lower bound for a multistage stochastic program is:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"mathbbE_i in R^+ omega in Omega_iV_i^K(x_R omega) le min_pi mathbbE_i in R^+ omega in Omega_iV_i^pi(x_R omega)","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Here's how we compute the lower bound:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function lower_bound(model::PolicyGraph)\n node = model.nodes[1]\n bound = 0.0\n for (p, ω) in zip(node.uncertainty.P, node.uncertainty.Ω)\n node.uncertainty.parameterize(ω)\n JuMP.optimize!(node.subproblem)\n bound += p * JuMP.objective_value(node.subproblem)\n end\n return bound\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"note: Note\nThe implementation is simplified because we assumed that there is only one arc from the root node, and that it pointed to the first node in the vector.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Because we haven't trained a policy yet, the lower bound is going to be very bad:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"lower_bound(model)","category":"page"},{"location":"explanation/theory_intro/#Upper-bounds","page":"Introductory theory","title":"Upper bounds","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"With Kelley's algorithm, we could easily construct an upper bound by evaluating f(x_K). However, it is almost always intractable to evaluate an upper bound for multistage stochastic programs due to the large number of nodes and the nested expectations. 
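For example, even our tiny model has 3^3 = 27 scenario paths to enumerate, and the count grows exponentially: a problem with 20 stages and three realizations per stage already has 3^20 ≈ 3.5 billion paths. 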
Instead, we can perform a Monte Carlo simulation of the policy to build a statistical estimate for the value of mathbbE_i in R^+ omega in Omega_iV_i^pi(x_R omega), where pi is the policy defined by the current approximations V^K_i.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function upper_bound(model::PolicyGraph; replications::Int)\n # Pipe the output to `devnull` so we don't print too much!\n simulations = [forward_pass(model, devnull) for i in 1:replications]\n z = [s[2] for s in simulations]\n μ = Statistics.mean(z)\n tσ = 1.96 * Statistics.std(z) / sqrt(replications)\n return μ, tσ\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"note: Note\nThe width of the confidence interval is incorrect if there are cycles in the graph, because the distribution of simulation costs z is not symmetric. The mean is correct, however.","category":"page"},{"location":"explanation/theory_intro/#Termination-criteria","page":"Introductory theory","title":"Termination criteria","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"In Kelley's algorithm, the upper bound was deterministic. Therefore, we could terminate the algorithm when the lower bound was sufficiently close to the upper bound. However, our upper bound for SDDP is not deterministic; it is a confidence interval!","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Some people suggest terminating SDDP when the lower bound is contained within the confidence interval. However, this is a poor choice because it is too easy to generate a false positive. For example, if we use a small number of replications then the width of the confidence will be large, and we are more likely to terminate!","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"In a future tutorial (not yet written...) we will discuss termination criteria in more depth. 
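(As a rough illustration of the false-positive problem: the half-width tσ returned by upper_bound scales like 1 / sqrt(replications), so using replications = 10 instead of replications = 100 makes the confidence interval roughly three times wider, and therefore much more likely to contain the lower bound purely by chance.) 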
For now, pick a large number of iterations and train for as long as possible.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"tip: Tip\nFor a rule of thumb, pick a large number of iterations to train the policy for (e.g., 10 times mathcalN times maxlimits_iinmathcalN Omega_i)","category":"page"},{"location":"explanation/theory_intro/#Implementation:-the-training-loop","page":"Introductory theory","title":"Implementation: the training loop","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"The train loop of SDDP just applies the forward and backward passes iteratively, followed by a final simulation to compute the upper bound confidence interval:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function train(\n model::PolicyGraph;\n iteration_limit::Int,\n replications::Int,\n io::IO = stdout,\n)\n for i in 1:iteration_limit\n println(io, \"Starting iteration $(i)\")\n outgoing_states, _ = forward_pass(model, io)\n backward_pass(model, outgoing_states, io)\n println(io, \"| Finished iteration\")\n println(io, \"| | lower_bound = \", lower_bound(model))\n end\n println(io, \"Termination status: iteration limit\")\n μ, tσ = upper_bound(model; replications = replications)\n println(io, \"Upper bound = $(μ) ± $(tσ)\")\n return\nend","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Using our model we defined earlier, we can go:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"train(model; iteration_limit = 3, replications = 100)","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Success! We trained a policy for a finite horizon multistage stochastic program using stochastic dual dynamic programming.","category":"page"},{"location":"explanation/theory_intro/#Implementation:-evaluating-the-policy","page":"Introductory theory","title":"Implementation: evaluating the policy","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"A final step is the ability to evaluate the policy at a given point.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"function evaluate_policy(\n model::PolicyGraph;\n node::Int,\n incoming_state::Dict{Symbol,Float64},\n random_variable,\n)\n the_node = model.nodes[node]\n the_node.uncertainty.parameterize(random_variable)\n for (k, v) in incoming_state\n JuMP.fix(the_node.states[k].in, v; force = true)\n end\n JuMP.optimize!(the_node.subproblem)\n return Dict(\n k => JuMP.value.(v) for\n (k, v) in JuMP.object_dictionary(the_node.subproblem)\n )\nend\n\nevaluate_policy(\n model;\n node = 1,\n incoming_state = Dict(:volume => 150.0),\n random_variable = 75,\n)","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"note: Note\nThe random variable can be out-of-sample, i.e., it doesn't have to be in the vector Omega we created when defining the model! 
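For example, the call above used random_variable = 75, which is not one of the three in-sample values 0.0, 50.0, or 100.0, and we could just as easily evaluate the policy at another arbitrary inflow such as 72.5:

evaluate_policy(
    model;
    node = 1,
    incoming_state = Dict(:volume => 150.0),
    random_variable = 72.5,
)
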
This is a notable difference to other multistage stochastic solution methods like progressive hedging or using the deterministic equivalent.","category":"page"},{"location":"explanation/theory_intro/#Example:-infinite-horizon","page":"Introductory theory","title":"Example: infinite horizon","text":"","category":"section"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"As promised earlier, our implementation is actually pretty general. It can solve any multistage stochastic (linear) program defined by a policy graph, including infinite horizon problems!","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Here's an example, where we have extended our earlier problem with an arc from node 3 to node 2 with probability 0.5. You can interpret the 0.5 as a discount factor.","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"model = PolicyGraph(\n subproblem_builder;\n graph = [Dict(2 => 1.0), Dict(3 => 1.0), Dict(2 => 0.5)],\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n)","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Then, train a policy:","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"train(model; iteration_limit = 3, replications = 100)","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"Success! We trained a policy for an infinite horizon multistage stochastic program using stochastic dual dynamic programming. Note how some of the forward passes are different lengths!","category":"page"},{"location":"explanation/theory_intro/","page":"Introductory theory","title":"Introductory theory","text":"evaluate_policy(\n model;\n node = 3,\n incoming_state = Dict(:volume => 100.0),\n random_variable = 10.0,\n)","category":"page"},{"location":"examples/generation_expansion/","page":"Generation expansion","title":"Generation expansion","text":"EditURL = \"generation_expansion.jl\"","category":"page"},{"location":"examples/generation_expansion/#Generation-expansion","page":"Generation expansion","title":"Generation expansion","text":"","category":"section"},{"location":"examples/generation_expansion/","page":"Generation expansion","title":"Generation expansion","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/generation_expansion/","page":"Generation expansion","title":"Generation expansion","text":"using SDDP\nimport HiGHS\nimport Test\n\nfunction generation_expansion(duality_handler)\n build_cost = 1e4\n use_cost = 4\n num_units = 5\n capacities = ones(num_units)\n demand_vals =\n 0.5 * [\n 5 5 5 5 5 5 5 5\n 4 3 1 3 0 9 8 17\n 0 9 4 2 19 19 13 7\n 25 11 4 14 4 6 15 12\n 6 7 5 3 8 4 17 13\n ]\n # Cost of unmet demand\n penalty = 5e5\n # Discounting rate\n rho = 0.99\n model = SDDP.LinearPolicyGraph(;\n stages = 5,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, stage\n @variable(\n sp,\n 0 <= invested[1:num_units] <= 1,\n SDDP.State,\n Int,\n initial_value = 0\n )\n @variables(sp, begin\n generation >= 0\n unmet >= 0\n demand\n end)\n\n @constraints(\n sp,\n begin\n # Can't un-invest\n investment[i in 1:num_units], invested[i].out >= invested[i].in\n # Generation capacity\n sum(capacities[i] * invested[i].out for i in 1:num_units) >=\n generation\n # Meet demand or pay a penalty\n unmet >= demand - sum(generation)\n # For fewer iterations order the units to break symmetry, units are identical (tougher numerically)\n [j in 1:(num_units-1)], invested[j].out <= invested[j+1].out\n end\n )\n # Demand is uncertain\n SDDP.parameterize(ω -> JuMP.fix(demand, ω), sp, demand_vals[stage, :])\n\n @expression(\n sp,\n investment_cost,\n build_cost *\n sum(invested[i].out - invested[i].in for i in 1:num_units)\n )\n @stageobjective(\n sp,\n (investment_cost + generation * use_cost) * rho^(stage - 1) +\n penalty * unmet\n )\n end\n if get(ARGS, 1, \"\") == \"--write\"\n # Run `$ julia generation_expansion.jl --write` to update the benchmark\n # model directory\n model_dir = joinpath(@__DIR__, \"..\", \"..\", \"..\", \"benchmarks\", \"models\")\n SDDP.write_to_file(\n model,\n joinpath(model_dir, \"generation_expansion.sof.json.gz\");\n test_scenarios = 100,\n )\n exit(0)\n end\n SDDP.train(model; log_frequency = 10, duality_handler = duality_handler)\n Test.@test SDDP.calculate_bound(model) ≈ 2.078860e6 atol = 1e3\n return\nend\n\ngeneration_expansion(SDDP.ContinuousConicDuality())\ngeneration_expansion(SDDP.LagrangianDuality())","category":"page"},{"location":"examples/biobjective_hydro/","page":"Biobjective hydro-thermal","title":"Biobjective hydro-thermal","text":"EditURL = \"biobjective_hydro.jl\"","category":"page"},{"location":"examples/biobjective_hydro/#Biobjective-hydro-thermal","page":"Biobjective hydro-thermal","title":"Biobjective hydro-thermal","text":"","category":"section"},{"location":"examples/biobjective_hydro/","page":"Biobjective hydro-thermal","title":"Biobjective hydro-thermal","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/biobjective_hydro/","page":"Biobjective hydro-thermal","title":"Biobjective hydro-thermal","text":"using SDDP, HiGHS, Statistics, Test\n\nfunction biobjective_example()\n model = SDDP.LinearPolicyGraph(;\n stages = 3,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, _\n @variable(subproblem, 0 <= v <= 200, SDDP.State, initial_value = 50)\n @variables(subproblem, begin\n 0 <= g[i = 1:2] <= 100\n 0 <= u <= 150\n s >= 0\n shortage_cost >= 0\n end)\n @expressions(subproblem, begin\n objective_1, g[1] + 10 * g[2]\n objective_2, shortage_cost\n end)\n @constraints(subproblem, begin\n inflow_constraint, v.out == v.in - u - s\n g[1] + g[2] + u == 150\n shortage_cost >= 40 - v.out\n shortage_cost >= 60 - 2 * v.out\n shortage_cost >= 80 - 4 * v.out\n end)\n # You must call this for a biobjective problem!\n SDDP.initialize_biobjective_subproblem(subproblem)\n SDDP.parameterize(subproblem, 0.0:5:50.0) do ω\n JuMP.set_normalized_rhs(inflow_constraint, ω)\n # You must call `set_biobjective_functions` from within\n # `SDDP.parameterize`.\n return SDDP.set_biobjective_functions(\n subproblem,\n objective_1,\n objective_2,\n )\n end\n end\n pareto_weights =\n SDDP.train_biobjective(model; solution_limit = 10, iteration_limit = 10)\n solutions = [(k, v) for (k, v) in pareto_weights]\n sort!(solutions; by = x -> x[1])\n @test length(solutions) == 10\n # Test for convexity! The gradient must be decreasing as we move from left\n # to right.\n gradient(a, b) = (b[2] - a[2]) / (b[1] - a[1])\n grad = Inf\n for i in 1:9\n new_grad = gradient(solutions[i], solutions[i+1])\n @test new_grad < grad\n grad = new_grad\n end\n return\nend\n\nbiobjective_example()","category":"page"},{"location":"examples/asset_management_simple/","page":"Asset management","title":"Asset management","text":"EditURL = \"asset_management_simple.jl\"","category":"page"},{"location":"examples/asset_management_simple/#Asset-management","page":"Asset management","title":"Asset management","text":"","category":"section"},{"location":"examples/asset_management_simple/","page":"Asset management","title":"Asset management","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/asset_management_simple/","page":"Asset management","title":"Asset management","text":"Taken from the book J.R. Birge, F. 
Louveaux, Introduction to Stochastic Programming, Springer Series in Operations Research and Financial Engineering, Springer New York, New York, NY, 2011","category":"page"},{"location":"examples/asset_management_simple/","page":"Asset management","title":"Asset management","text":"using SDDP, HiGHS, Test\n\nfunction asset_management_simple()\n model = SDDP.PolicyGraph(\n SDDP.MarkovianGraph(\n Array{Float64,2}[\n [1.0]',\n [0.5 0.5],\n [0.5 0.5; 0.5 0.5],\n [0.5 0.5; 0.5 0.5],\n ],\n );\n lower_bound = -1_000.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, index\n (stage, markov_state) = index\n r_stock = [1.25, 1.06]\n r_bonds = [1.14, 1.12]\n @variable(subproblem, stocks >= 0, SDDP.State, initial_value = 0.0)\n @variable(subproblem, bonds >= 0, SDDP.State, initial_value = 0.0)\n if stage == 1\n @constraint(subproblem, stocks.out + bonds.out == 55)\n @stageobjective(subproblem, 0)\n elseif 1 < stage < 4\n @constraint(\n subproblem,\n r_stock[markov_state] * stocks.in +\n r_bonds[markov_state] * bonds.in == stocks.out + bonds.out\n )\n @stageobjective(subproblem, 0)\n else\n @variable(subproblem, over >= 0)\n @variable(subproblem, short >= 0)\n @constraint(\n subproblem,\n r_stock[markov_state] * stocks.in +\n r_bonds[markov_state] * bonds.in - over + short == 80\n )\n @stageobjective(subproblem, -over + 4 * short)\n end\n end\n SDDP.train(model; log_frequency = 5)\n @test SDDP.calculate_bound(model) ≈ 1.514 atol = 1e-4\n return\nend\n\nasset_management_simple()","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"EditURL = \"inventory.jl\"","category":"page"},{"location":"tutorial/inventory/#Example:-inventory-management","page":"Example: inventory management","title":"Example: inventory management","text":"","category":"section"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"The purpose of this tutorial is to demonstrate a well-known inventory management problem with a finite- and infinite-horizon policy.","category":"page"},{"location":"tutorial/inventory/#Required-packages","page":"Example: inventory management","title":"Required packages","text":"","category":"section"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"This tutorial requires the following packages:","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"using SDDP\nimport Distributions\nimport HiGHS\nimport Plots\nimport Statistics","category":"page"},{"location":"tutorial/inventory/#Background","page":"Example: inventory management","title":"Background","text":"","category":"section"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"Consider a periodic review inventory problem involving a single product. The initial inventory is denoted by x_0 geq 0, and a decision-maker can place an order at the start of each stage. The objective is to minimize expected costs over the planning horizon. 
The following parameters define the cost structure:","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"c is the unit cost for purchasing each unit\nh is the holding cost per unit remaining at the end of each stage\np is the shortage cost per unit of unsatisfied demand at the end of each stage","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"There are no fixed ordering costs, and the demand at each stage is assumed to follow an independent and identically distributed random variable with cumulative distribution function (CDF) Φ(⋅). Any unsatisfied demand is backlogged and carried forward to the next stage.","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"At each stage, an agent must decide how many items to order. The per-stage cost is the sum of the ordering cost plus the shortage and holding costs incurred at the end of the stage, after demand is realized.","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"Following Chapter 19 of Introduction to Operations Research by Hillier and Lieberman (7th edition), we use the following parameters: c = 15, h = 1, p = 15.","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"x_0 = 10 # initial inventory\nc = 35 # unit purchase cost\nh = 1 # unit inventory holding cost\np = 15 # unit shortage cost","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"Demand follows a continuous uniform distribution between 0 and 800. We construct a sample average approximation with 20 scenarios:","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"Ω = range(0, 800; length = 20);\nnothing #hide","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"This is a well-known inventory problem with a closed-form solution. The optimal policy is a simple order-up-to policy: if the inventory level is below a certain number of units, the decision-maker orders up to that number of units. Otherwise, no order is placed. 
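To make the order-up-to structure concrete, here is a minimal sketch of such a decision rule (the target level S below is purely illustrative and is not derived in this tutorial):

```julia
# Order-up-to (base-stock) rule: if the inventory position is below the target
# S, order the difference; otherwise, order nothing.
order_up_to(inventory, S) = max(0.0, S - inventory)

order_up_to(100.0, 650.0)  # returns 550.0: bring the position back up to S
order_up_to(700.0, 650.0)  # returns 0.0: already above the target, no order
```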
For a detailed analysis, refer to Foundations of Stochastic Inventory Theory by Evan Porteus (Stanford Business Books, 2002).","category":"page"},{"location":"tutorial/inventory/#Finite-horizon","page":"Example: inventory management","title":"Finite horizon","text":"","category":"section"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"For a finite horizon of length T, the problem is to minimize the total expected cost over all stages.","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"In the last stage, the decision-maker can recover the unit cost c for each leftover item, or buy out any remaining backlog, also at the unit cost c.","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"T = 10 # number of stages\nmodel = SDDP.LinearPolicyGraph(;\n stages = T + 1,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variable(sp, x_inventory >= 0, SDDP.State, initial_value = x_0)\n @variable(sp, x_demand >= 0, SDDP.State, initial_value = 0)\n # u_buy is a Decision-Hazard control variable. We decide u.out for use in\n # the next stage\n @variable(sp, u_buy >= 0, SDDP.State, initial_value = 0)\n @variable(sp, u_sell >= 0)\n @variable(sp, w_demand == 0)\n @constraint(sp, x_inventory.out == x_inventory.in + u_buy.in - u_sell)\n @constraint(sp, x_demand.out == x_demand.in + w_demand - u_sell)\n if t == 1\n fix(u_sell, 0; force = true)\n @stageobjective(sp, c * u_buy.out)\n elseif t == T + 1\n fix(u_buy.out, 0; force = true)\n @stageobjective(sp, -c * x_inventory.out + c * x_demand.out)\n SDDP.parameterize(ω -> JuMP.fix(w_demand, ω), sp, Ω)\n else\n @stageobjective(sp, c * u_buy.out + h * x_inventory.out + p * x_demand.out)\n SDDP.parameterize(ω -> JuMP.fix(w_demand, ω), sp, Ω)\n end\n return\nend","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"Train and simulate the policy:","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"SDDP.train(model)\nsimulations = SDDP.simulate(model, 200, [:x_inventory, :u_buy])\nobjective_values = [sum(t[:stage_objective] for t in s) for s in simulations]\nμ, ci = round.(SDDP.confidence_interval(objective_values, 1.96); digits = 2)\nlower_bound = round(SDDP.calculate_bound(model); digits = 2)\nprintln(\"Confidence interval: \", μ, \" ± \", ci)\nprintln(\"Lower bound: \", lower_bound)","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"Plot the optimal inventory levels:","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"plt = SDDP.publication_plot(\n simulations;\n title = \"x_inventory.out + u_buy.out\",\n xlabel = \"Stage\",\n ylabel = \"Quantity\",\n ylims = (0, 1_000),\n) do data\n return data[:x_inventory].out + data[:u_buy].out\nend","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"In the early stages, we indeed recover an order-up-to policy. 
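As a rough sanity check (this snippet is not part of the original tutorial), we can estimate the implied order-up-to level in each stage by averaging x_inventory.out + u_buy.out over the 200 simulated replications:

```julia
# Average post-decision position (inventory on hand plus the order just placed)
# in each stage. Where the policy is an order-up-to rule, this average
# approximates that stage's target level.
stage_targets = [
    Statistics.mean(
        s[t][:x_inventory].out + s[t][:u_buy].out for s in simulations
    ) for t in 1:T+1
]
```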
However, there are end-of-horizon effects as the agent tries to optimize their decision-making knowing that only 10 stages of demand remain.","category":"page"},{"location":"tutorial/inventory/#Infinite-horizon","page":"Example: inventory management","title":"Infinite horizon","text":"","category":"section"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"We can remove the end-of-horizon effects by considering an infinite-horizon model. We assume a discount factor alpha = 0.95:","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"α = 0.95\ngraph = SDDP.LinearGraph(2)\nSDDP.add_edge(graph, 2 => 2, α)\ngraph","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"The objective in this case is to minimize the discounted expected costs over an infinite planning horizon.","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"model = SDDP.PolicyGraph(\n graph;\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variable(sp, x_inventory >= 0, SDDP.State, initial_value = x_0)\n @variable(sp, x_demand >= 0, SDDP.State, initial_value = 0)\n # u_buy is a Decision-Hazard control variable. We decide u.out for use in\n # the next stage\n @variable(sp, u_buy >= 0, SDDP.State, initial_value = 0)\n @variable(sp, u_sell >= 0)\n @variable(sp, w_demand == 0)\n @constraint(sp, x_inventory.out == x_inventory.in + u_buy.in - u_sell)\n @constraint(sp, x_demand.out == x_demand.in + w_demand - u_sell)\n if t == 1\n fix(u_sell, 0; force = true)\n @stageobjective(sp, c * u_buy.out)\n else\n @stageobjective(sp, c * u_buy.out + h * x_inventory.out + p * x_demand.out)\n SDDP.parameterize(ω -> JuMP.fix(w_demand, ω), sp, Ω)\n end\n return\nend\n\nSDDP.train(model; iteration_limit = 400)\nsimulations = SDDP.simulate(\n model,\n 200,\n [:x_inventory, :u_buy];\n sampling_scheme = SDDP.InSampleMonteCarlo(;\n max_depth = 50,\n terminate_on_dummy_leaf = false,\n ),\n);\nnothing #hide","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"Plot the optimal inventory levels:","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"plt = SDDP.publication_plot(\n simulations;\n title = \"x_inventory.out + u_buy.out\",\n xlabel = \"Stage\",\n ylabel = \"Quantity\",\n ylims = (0, 1_000),\n) do data\n return data[:x_inventory].out + data[:u_buy].out\nend\nPlots.hline!(plt, [662]; label = \"Analytic solution\")","category":"page"},{"location":"tutorial/inventory/","page":"Example: inventory management","title":"Example: inventory management","text":"We again recover an order-up-to policy. The analytic solution is to order up to 662 units. We do not precisely recover this solution because we used a sample average approximation of 20 elements. 
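For example (a hypothetical refinement; it is not run as part of this tutorial), we could rebuild and retrain the model with a finer sample average approximation of the demand distribution:

```julia
# 200 equally spaced demand scenarios instead of 20. Retraining with this Ω
# should move the simulated order-up-to level closer to the analytic value.
Ω = range(0, 800; length = 200);
```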
If we increased the number of samples, our solution would approach the analytic solution.","category":"page"},{"location":"guides/access_previous_variables/#Access-variables-from-a-previous-stage","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"","category":"section"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"A common question is \"how do I use a variable from a previous stage in a constraint?\"","category":"page"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"info: Info\nIf you want to use a variable from a previous stage, it must be a state variable.","category":"page"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"Here are some examples:","category":"page"},{"location":"guides/access_previous_variables/#Access-a-first-stage-decision-in-a-future-stage","page":"Access variables from a previous stage","title":"Access a first-stage decision in a future stage","text":"","category":"section"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"This is often useful if your first-stage decisions are capacity-expansion type decisions (e.g., you choose first how much capacity to add, but because it takes time to build, it only shows up in some future stage).","category":"page"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"using SDDP, HiGHS\nSDDP.LinearPolicyGraph(\n stages = 10,\n sense = :Max,\n upper_bound = 100.0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n # Capacity of the generator. Decided in the first stage.\n @variable(sp, capacity >= 0, SDDP.State, initial_value = 0)\n # Quantity of water stored.\n @variable(sp, reservoir >= 0, SDDP.State, initial_value = 0)\n # Quantity of water to use for electricity generation in current stage.\n @variable(sp, generation >= 0)\n if t == 1\n # There are no constraints in the first stage, but we need to push the\n # initial value of the reservoir to the next stage.\n @constraint(sp, reservoir.out == reservoir.in)\n # Since we're maximizing profit, subtract cost of capacity.\n @stageobjective(sp, -capacity.out)\n else\n # Water balance constraint.\n @constraint(sp, balance, reservoir.out - reservoir.in + generation == 0)\n # Generation limit.\n @constraint(sp, generation <= capacity.in)\n # Push capacity to the next stage.\n @constraint(sp, capacity.out == capacity.in)\n # Maximize generation.\n @stageobjective(sp, generation)\n # Random inflow in balance constraint.\n SDDP.parameterize(sp, rand(4)) do w\n set_normalized_rhs(balance, w)\n end\n end\nend","category":"page"},{"location":"guides/access_previous_variables/#Access-a-decision-from-N-stages-ago","page":"Access variables from a previous stage","title":"Access a decision from N stages ago","text":"","category":"section"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"This is often useful if you have some inventory problem with a lead time on orders. 
In the code below, we assume that the product has a lead time of 5 stages, and we use a state variable to track the decisions on the production for the last 5 stages. The decisions are passed to the next stage by shifting them by one stage.","category":"page"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"using SDDP, HiGHS\nSDDP.LinearPolicyGraph(\n stages = 10,\n sense = :Max,\n upper_bound = 100,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n # Current inventory on hand.\n @variable(sp, inventory >= 0, SDDP.State, initial_value = 0)\n # Inventory pipeline.\n # pipeline[1].out are orders placed today.\n # pipeline[5].in are orders that arrive today and can be added to the\n # current inventory.\n # Stock moves up one slot in the pipeline each stage.\n @variable(sp, pipeline[1:5], SDDP.State, initial_value = 0)\n # The number of units to order today.\n @variable(sp, 0 <= buy <= 10)\n # The number of units to sell today.\n @variable(sp, sell >= 0)\n # Buy orders get placed in the pipeline.\n @constraint(sp, pipeline[1].out == buy)\n # Stock moves up one slot in the pipeline each stage.\n @constraint(sp, [i=2:5], pipeline[i].out == pipeline[i-1].in)\n # Stock balance constraint.\n @constraint(sp, inventory.out == inventory.in - sell + pipeline[5].in)\n # Maximize quantity of sold items.\n @stageobjective(sp, sell)\nend","category":"page"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"warning: Warning\nYou must initialize the same number of state variables in every stage, even if they are not used in that stage.","category":"page"},{"location":"guides/access_previous_variables/#Stochastic-lead-times","page":"Access variables from a previous stage","title":"Stochastic lead times","text":"","category":"section"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"Stochastic lead times can be modeled by adding stochasticity to the pipeline balance constraint.","category":"page"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"The trick is to use the random variable omega to represent the lead time, together with JuMP.set_normalized_coefficient to add u_buy to the i pipeline balance constraint when omega is equal to i. 
For example, if omega = 2 and T = 4, we would have constraints:","category":"page"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"c_pipeline[1], x_pipeline[1].out == x_pipeline[2].in + 0 * u_buy\nc_pipeline[2], x_pipeline[2].out == x_pipeline[3].in + 1 * u_buy\nc_pipeline[3], x_pipeline[3].out == x_pipeline[4].in + 0 * u_buy\nc_pipeline[4], x_pipeline[4].out == x_pipeline[5].in + 0 * u_buy","category":"page"},{"location":"guides/access_previous_variables/","page":"Access variables from a previous stage","title":"Access variables from a previous stage","text":"using SDDP\nimport HiGHS\nT = 10\nmodel = SDDP.LinearPolicyGraph(\n stages = 20,\n sense = :Max,\n upper_bound = 1000,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variables(sp, begin\n x_inventory >= 0, SDDP.State, (initial_value = 0)\n x_pipeline[1:T+1], SDDP.State, (initial_value = 0)\n 0 <= u_buy <= 10\n u_sell >= 0\n end)\n fix(x_pipeline[T+1].out, 0)\n @stageobjective(sp, u_sell)\n @constraints(sp, begin\n # Shift the orders one stage \n c_pipeline[i=1:T], x_pipeline[i].out == x_pipeline[i+1].in + 1 * u_buy\n # x_pipeline[1].in are arriving on the inventory\n x_inventory.out == x_inventory.in - u_sell + x_pipeline[1].in\n end)\n SDDP.parameterize(sp, 1:T) do ω\n # Rewrite the constraint c_pipeline[i=1:T] indicating how many stages\n # ahead the order will arrive (ω)\n # if ω == i:\n # x_pipeline[i+1].in + 1 * u_buy == x_pipeline[i].out\n # else:\n # x_pipeline[i+1].in + 0 * u_buy == x_pipeline[i].out\n for i in 1:T\n set_normalized_coefficient(c_pipeline[i], u_buy, ω == i ? 1 : 0)\n end\n end\nend","category":"page"},{"location":"guides/create_a_belief_state/","page":"Create a belief state","title":"Create a belief state","text":"DocTestSetup = quote\n using SDDP\nend","category":"page"},{"location":"guides/create_a_belief_state/#Create-a-belief-state","page":"Create a belief state","title":"Create a belief state","text":"","category":"section"},{"location":"guides/create_a_belief_state/","page":"Create a belief state","title":"Create a belief state","text":"SDDP.jl includes an implementation of the algorithm described in Dowson, O., Morton, D.P., & Pagnoncelli, B.K. (2020). Partially observable multistage stochastic optimization. 
Operations Research Letters, 48(4), 505–512.","category":"page"},{"location":"guides/create_a_belief_state/","page":"Create a belief state","title":"Create a belief state","text":"Given a SDDP.Graph object (see Create a general policy graph for details), we can define the ambiguity partition using SDDP.add_ambiguity_set.","category":"page"},{"location":"guides/create_a_belief_state/","page":"Create a belief state","title":"Create a belief state","text":"For example, first we create a Markovian graph:","category":"page"},{"location":"guides/create_a_belief_state/","page":"Create a belief state","title":"Create a belief state","text":"using SDDP\nG = SDDP.MarkovianGraph([[0.5 0.5], [0.2 0.8; 0.8 0.2]])","category":"page"},{"location":"guides/create_a_belief_state/","page":"Create a belief state","title":"Create a belief state","text":"Then we add an ambiguity set over the nodes in the each stage:","category":"page"},{"location":"guides/create_a_belief_state/","page":"Create a belief state","title":"Create a belief state","text":"for t in 1:2\n SDDP.add_ambiguity_set(G, [(t, 1), (t, 2)])\nend","category":"page"},{"location":"guides/create_a_belief_state/","page":"Create a belief state","title":"Create a belief state","text":"This results in the graph:","category":"page"},{"location":"guides/create_a_belief_state/","page":"Create a belief state","title":"Create a belief state","text":"G","category":"page"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"CurrentModule = SDDP","category":"page"},{"location":"release_notes/#Release-notes","page":"Release notes","title":"Release notes","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.","category":"page"},{"location":"release_notes/#[v1.9.0](https://github.com/odow/SDDP.jl/releases/tag/v1.9.0)-(October-17,-2024)","page":"Release notes","title":"v1.9.0 (October 17, 2024)","text":"","category":"section"},{"location":"release_notes/#Added","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added write_only_selected_cuts and cut_selection keyword arguments to write_cuts_to_file and read_cuts_from_file to skip potentially expensive operations (#781) (#784)\nAdded set_numerical_difficulty_callback to modify the subproblem on numerical difficulty (#790)","category":"page"},{"location":"release_notes/#Fixed","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed the tests to skip threading tests if running in serial (#770)\nFixed BanditDuality to handle the case where the standard deviation is NaN (#779)\nFixed an error when lagged state variables are encountered in MSPFormat (#786)\nFixed publication_plot with replications of different lengths (#788)\nFixed CTRL+C interrupting the code at unsafe points (#789)","category":"page"},{"location":"release_notes/#Other","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#771) (#772)\nUpdated printing because of changes in JuMP (#773)","category":"page"},{"location":"release_notes/#[v1.8.1](https://github.com/odow/SDDP.jl/releases/tag/v1.8.1)-(August-5,-2024)","page":"Release 
notes","title":"v1.8.1 (August 5, 2024)","text":"","category":"section"},{"location":"release_notes/#Fixed-2","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed various issues with SDDP.Threaded() (#761)\nFixed a deprecation warning for sorting a dictionary (#763)","category":"page"},{"location":"release_notes/#Other-2","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Updated copyright notices (#762)\nUpdated .JuliaFormatter.toml (#764)","category":"page"},{"location":"release_notes/#[v1.8.0](https://github.com/odow/SDDP.jl/releases/tag/v1.8.0)-(July-24,-2024)","page":"Release notes","title":"v1.8.0 (July 24, 2024)","text":"","category":"section"},{"location":"release_notes/#Added-2","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added SDDP.Threaded(), which is an experimental parallel scheme that supports solving problems using multiple threads. Some parts of SDDP.jl may not be thread-safe, and this can cause incorrect results, segfaults, or other errors. Please use with care and report any issues by opening a GitHub issue. (#758)","category":"page"},{"location":"release_notes/#Other-3","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Documentation improvements and fixes (#747) (#759)","category":"page"},{"location":"release_notes/#[v1.7.0](https://github.com/odow/SDDP.jl/releases/tag/v1.7.0)-(June-4,-2024)","page":"Release notes","title":"v1.7.0 (June 4, 2024)","text":"","category":"section"},{"location":"release_notes/#Added-3","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added sample_backward_noise_terms_with_state for creating backward pass sampling schemes that depend on the current primal state. 
(#742) (Thanks @arthur-brigatto)","category":"page"},{"location":"release_notes/#Fixed-3","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed error message when publication_plot has non-finite data (#738)","category":"page"},{"location":"release_notes/#Other-4","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Updated the logo constructor (#730)","category":"page"},{"location":"release_notes/#[v1.6.7](https://github.com/odow/SDDP.jl/releases/tag/v1.6.7)-(February-1,-2024)","page":"Release notes","title":"v1.6.7 (February 1, 2024)","text":"","category":"section"},{"location":"release_notes/#Fixed-4","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed non-constant state dimension in the MSPFormat reader (#695)\nFixed SimulatorSamplingScheme for deterministic nodes (#710)\nFixed line search in BFGS (#711)\nFixed handling of NEARLY_FEASIBLE_POINT status (#726)","category":"page"},{"location":"release_notes/#Other-5","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#692) (#694) (#706) (#716) (#727)\nUpdated to StochOptFormat v1.0 (#705)\nAdded an experimental OuterApproximation algorithm (#709)\nUpdated .gitignore (#717)\nAdded code for MDP paper (#720) (#721)\nAdded Google analytics (#723)","category":"page"},{"location":"release_notes/#[v1.6.6](https://github.com/odow/SDDP.jl/releases/tag/v1.6.6)-(September-29,-2023)","page":"Release notes","title":"v1.6.6 (September 29, 2023)","text":"","category":"section"},{"location":"release_notes/#Other-6","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Updated Example: two-stage newsvendor tutorial (#689)\nAdded a warning for people using SDDP.Statistical (#687)","category":"page"},{"location":"release_notes/#[v1.6.5](https://github.com/odow/SDDP.jl/releases/tag/v1.6.5)-(September-25,-2023)","page":"Release notes","title":"v1.6.5 (September 25, 2023)","text":"","category":"section"},{"location":"release_notes/#Fixed-5","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed duplicate nodes in MarkovianGraph (#681)","category":"page"},{"location":"release_notes/#Other-7","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Updated tutorials (#677) (#678) (#682) (#683)\nFixed documentation preview (#679)","category":"page"},{"location":"release_notes/#[v1.6.4](https://github.com/odow/SDDP.jl/releases/tag/v1.6.4)-(September-23,-2023)","page":"Release notes","title":"v1.6.4 (September 23, 2023)","text":"","category":"section"},{"location":"release_notes/#Fixed-6","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed error for invalid log_frequency values (#665)\nFixed objective sense in deterministic_equivalent (#673)","category":"page"},{"location":"release_notes/#Other-8","page":"Release 
notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Documentation updates (#658) (#666) (#671)\nSwitch to GitHub action for deploying docs (#668) (#670)\nUpdate to Documenter@1 (#669)","category":"page"},{"location":"release_notes/#[v1.6.3](https://github.com/odow/SDDP.jl/releases/tag/v1.6.3)-(September-8,-2023)","page":"Release notes","title":"v1.6.3 (September 8, 2023)","text":"","category":"section"},{"location":"release_notes/#Fixed-7","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed default stopping rule with iteration_limit or time_limit set (#662)","category":"page"},{"location":"release_notes/#Other-9","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Various documentation improvements (#651) (#657) (#659) (#660)","category":"page"},{"location":"release_notes/#[v1.6.2](https://github.com/odow/SDDP.jl/releases/tag/v1.6.2)-(August-24,-2023)","page":"Release notes","title":"v1.6.2 (August 24, 2023)","text":"","category":"section"},{"location":"release_notes/#Fixed-8","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"MSPFormat now detect and exploit stagewise independent lattices (#653)\nFixed set_optimizer for models read from file (#654)","category":"page"},{"location":"release_notes/#Other-10","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed typo in pglib_opf.jl (#647)\nFixed documentation build and added color (#652)","category":"page"},{"location":"release_notes/#[v1.6.1](https://github.com/odow/SDDP.jl/releases/tag/v1.6.1)-(July-20,-2023)","page":"Release notes","title":"v1.6.1 (July 20, 2023)","text":"","category":"section"},{"location":"release_notes/#Fixed-9","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed bugs in MSPFormat reader (#638) (#639)","category":"page"},{"location":"release_notes/#Other-11","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Clarified OutOfSampleMonteCarlo docstring (#643)","category":"page"},{"location":"release_notes/#[v1.6.0](https://github.com/odow/SDDP.jl/releases/tag/v1.6.0)-(July-3,-2023)","page":"Release notes","title":"v1.6.0 (July 3, 2023)","text":"","category":"section"},{"location":"release_notes/#Added-4","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added RegularizedForwardPass (#624)\nAdded FirstStageStoppingRule (#634)","category":"page"},{"location":"release_notes/#Other-12","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Removed an unbound type parameter (#632)\nFixed typo in docstring (#633)\nAdded Here-and-now and hazard-decision tutorial (#635)","category":"page"},{"location":"release_notes/#[v1.5.1](https://github.com/odow/SDDP.jl/releases/tag/v1.5.1)-(June-30,-2023)","page":"Release 
notes","title":"v1.5.1 (June 30, 2023)","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"This release contains a number of minor code changes, but it has a large impact on the content that is printed to screen. In particular, we now log periodically, instead of each iteration, and a \"good\" stopping rule is used as the default if none are specified. Try using SDDP.train(model) to see the difference.","category":"page"},{"location":"release_notes/#Other-13","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed various typos in the documentation (#617)\nFixed printing test after changes in JuMP (#618)\nSet SimulationStoppingRule as the default stopping rule (#619)\nChanged the default logging frequency. Pass log_every_seconds = 0.0 to train to revert to the old behavior. (#620)\nAdded example usage with Distributions.jl (@slwu89) (#622)\nRemoved the numerical issue @warn (#627)\nImproved the quality of docstrings (#630)","category":"page"},{"location":"release_notes/#[v1.5.0](https://github.com/odow/SDDP.jl/releases/tag/v1.5.0)-(May-14,-2023)","page":"Release notes","title":"v1.5.0 (May 14, 2023)","text":"","category":"section"},{"location":"release_notes/#Added-5","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added the ability to use a different model for the forward pass. This is a novel feature that lets you train better policies when the model is non-convex or does not have a well-defined dual. See the Alternative forward models tutorial in which we train convex and non-convex formulations of the optimal power flow problem. (#611)","category":"page"},{"location":"release_notes/#Other-14","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Updated missing changelog entries (#608)\nRemoved global variables (#610)\nConverted the Options struct to keyword arguments. This struct was a private implementation detail, but the change is breaking if you developed an extension to SDDP that touched these internals. 
(#612)\nFixed some typos (#613)","category":"page"},{"location":"release_notes/#[v1.4.0](https://github.com/odow/SDDP.jl/releases/tag/v1.4.0)-(May-8,-2023)","page":"Release notes","title":"v1.4.0 (May 8, 2023)","text":"","category":"section"},{"location":"release_notes/#Added-6","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added SDDP.SimulationStoppingRule (#598)\nAdded sampling_scheme argument to SDDP.write_to_file (#607)","category":"page"},{"location":"release_notes/#Fixed-10","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed parsing of some MSPFormat files (#602) (#604)\nFixed printing in header (#605)","category":"page"},{"location":"release_notes/#[v1.3.0](https://github.com/odow/SDDP.jl/releases/tag/v1.3.0)-(May-3,-2023)","page":"Release notes","title":"v1.3.0 (May 3, 2023)","text":"","category":"section"},{"location":"release_notes/#Added-7","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added experimental support for SDDP.MSPFormat.read_from_file (#593)","category":"page"},{"location":"release_notes/#Other-15","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Updated to StochOptFormat v0.3 (#600)","category":"page"},{"location":"release_notes/#[v1.2.1](https://github.com/odow/SDDP.jl/releases/tag/v1.2.1)-(May-1,-2023)","page":"Release notes","title":"v1.2.1 (May 1, 2023)","text":"","category":"section"},{"location":"release_notes/#Fixed-11","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed log_every_seconds (#597)","category":"page"},{"location":"release_notes/#[v1.2.0](https://github.com/odow/SDDP.jl/releases/tag/v1.2.0)-(May-1,-2023)","page":"Release notes","title":"v1.2.0 (May 1, 2023)","text":"","category":"section"},{"location":"release_notes/#Added-8","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added SDDP.SimulatorSamplingScheme (#594)\nAdded log_every_seconds argument to SDDP.train (#595)","category":"page"},{"location":"release_notes/#Other-16","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Tweaked how the log is printed (#588)\nUpdated to StochOptFormat v0.2 (#592)","category":"page"},{"location":"release_notes/#[v1.1.4](https://github.com/odow/SDDP.jl/releases/tag/v1.1.4)-(April-10,-2023)","page":"Release notes","title":"v1.1.4 (April 10, 2023)","text":"","category":"section"},{"location":"release_notes/#Fixed-12","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Logs are now flushed every iteration (#584)","category":"page"},{"location":"release_notes/#Other-17","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added docstrings to various functions (#581)\nMinor documentation updates (#580)\nClarified integrality 
documentation (#582)\nUpdated the README (#585)\nNumber of numerical issues is now printed to the log (#586)","category":"page"},{"location":"release_notes/#[v1.1.3](https://github.com/odow/SDDP.jl/releases/tag/v1.1.3)-(April-2,-2023)","page":"Release notes","title":"v1.1.3 (April 2, 2023)","text":"","category":"section"},{"location":"release_notes/#Other-18","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed typo in Example: deterministic to stochastic tutorial (#578)\nFixed typo in documentation of SDDP.simulate (#577)","category":"page"},{"location":"release_notes/#[v1.1.2](https://github.com/odow/SDDP.jl/releases/tag/v1.1.2)-(March-18,-2023)","page":"Release notes","title":"v1.1.2 (March 18, 2023)","text":"","category":"section"},{"location":"release_notes/#Other-19","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added Example: deterministic to stochastic tutorial (#572)","category":"page"},{"location":"release_notes/#[v1.1.1](https://github.com/odow/SDDP.jl/releases/tag/v1.1.1)-(March-16,-2023)","page":"Release notes","title":"v1.1.1 (March 16, 2023)","text":"","category":"section"},{"location":"release_notes/#Other-20","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed email in Project.toml\nAdded notebook to documentation tutorials (#571)","category":"page"},{"location":"release_notes/#[v1.1.0](https://github.com/odow/SDDP.jl/releases/tag/v1.1.0)-(January-12,-2023)","page":"Release notes","title":"v1.1.0 (January 12, 2023)","text":"","category":"section"},{"location":"release_notes/#Added-9","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added the node_name_parser argument to SDDP.write_cuts_to_file and added the option to skip nodes in SDDP.read_cuts_from_file (#565)","category":"page"},{"location":"release_notes/#[v1.0.0](https://github.com/odow/SDDP.jl/releases/tag/v1.0.0)-(January-3,-2023)","page":"Release notes","title":"v1.0.0 (January 3, 2023)","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Although we're bumping MAJOR version, this is a non-breaking release. Going forward:","category":"page"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"New features will bump the MINOR version\nBug fixes, maintenance, and documentation updates will bump the PATCH version\nWe will support only the Long Term Support (currently v1.6.7) and the latest patch (currently v1.8.4) releases of Julia. Updates to the LTS version will bump the MINOR version\nUpdates to the compat bounds of package dependencies will bump the PATCH version.","category":"page"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"We do not intend any breaking changes to the public API, which would require a new MAJOR release. The public API is everything defined in the documentation. 
Anything not in the documentation is considered private and may change in any PATCH release.","category":"page"},{"location":"release_notes/#Added-10","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added num_nodes argument to SDDP.UnicyclicGraph (#562)\nAdded support for passing an optimizer to SDDP.Asynchronous (#545)","category":"page"},{"location":"release_notes/#Other-21","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Updated Plotting tools to use live plots (#563)\nAdded vale as a linter (#565)\nImproved documentation for initializing a parallel scheme (#566)","category":"page"},{"location":"release_notes/#[v0.4.9](https://github.com/odow/SDDP.jl/releases/tag/v0.4.9)-(January-3,-2023)","page":"Release notes","title":"v0.4.9 (January 3, 2023)","text":"","category":"section"},{"location":"release_notes/#Added-11","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added SDDP.UnicyclicGraph (#556)","category":"page"},{"location":"release_notes/#Other-22","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added tutorial on Markov Decision Processes (#556)\nAdded two-stage newsvendor tutorial (#557)\nRefactored the layout of the documentation (#554) (#555)\nUpdated copyright to 2023 (#558)\nFixed errors in the documentation (#561)","category":"page"},{"location":"release_notes/#[v0.4.8](https://github.com/odow/SDDP.jl/releases/tag/v0.4.8)-(December-19,-2022)","page":"Release notes","title":"v0.4.8 (December 19, 2022)","text":"","category":"section"},{"location":"release_notes/#Added-12","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added terminate_on_cycle option to SDDP.Historical (#549)\nAdded include_last_node option to SDDP.DefaultForwardPass (#547)","category":"page"},{"location":"release_notes/#Fixed-13","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Reverted then fixed (#531) because it failed to account for problems with integer variables (#546) (#551)","category":"page"},{"location":"release_notes/#[v0.4.7](https://github.com/odow/SDDP.jl/releases/tag/v0.4.7)-(December-17,-2022)","page":"Release notes","title":"v0.4.7 (December 17, 2022)","text":"","category":"section"},{"location":"release_notes/#Added-13","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added initial_node support to InSampleMonteCarlo and OutOfSampleMonteCarlo (#535)","category":"page"},{"location":"release_notes/#Fixed-14","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Rethrow InterruptException when solver is interrupted (#534)\nFixed numerical recovery when we need dual solutions (#531) (Thanks @bfpc)\nFixed re-using the dashboard = true option between solves (#538)\nFixed bug when no @stageobjective is set (now defaults to 0.0) (#539)\nFixed errors thrown when invalid 
inputs are provided to add_objective_state (#540)","category":"page"},{"location":"release_notes/#Other-23","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Drop support for Julia versions prior to 1.6 (#533)\nUpdated versions of dependencies (#522) (#533)\nSwitched to HiGHS in the documentation and tests (#533)\nAdded license headers (#519)\nFixed link in air conditioning example (#521) (Thanks @conema)\nClarified variable naming in deterministic equivalent (#525) (Thanks @lucasprocessi)\nAdded this change log (#536)\nCuts are now written to model.cuts.json when numerical instability is discovered. This can aid debugging because it allows to you reload the cuts as of the iteration that caused the numerical issue (#537)","category":"page"},{"location":"release_notes/#[v0.4.6](https://github.com/odow/SDDP.jl/releases/tag/v0.4.6)-(March-25,-2022)","page":"Release notes","title":"v0.4.6 (March 25, 2022)","text":"","category":"section"},{"location":"release_notes/#Other-24","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Updated to JuMP v1.0 (#517)","category":"page"},{"location":"release_notes/#[v0.4.5](https://github.com/odow/SDDP.jl/releases/tag/v0.4.5)-(March-9,-2022)","page":"Release notes","title":"v0.4.5 (March 9, 2022)","text":"","category":"section"},{"location":"release_notes/#Fixed-15","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed issue with set_silent in a subproblem (#510)","category":"page"},{"location":"release_notes/#Other-25","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed many typos (#500) (#501) (#506) (#511) (Thanks @bfpc)\nUpdate to JuMP v0.23 (#514)\nAdded auto-regressive tutorial (#507)","category":"page"},{"location":"release_notes/#[v0.4.4](https://github.com/odow/SDDP.jl/releases/tag/v0.4.4)-(December-11,-2021)","page":"Release notes","title":"v0.4.4 (December 11, 2021)","text":"","category":"section"},{"location":"release_notes/#Added-14","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added BanditDuality (#471)\nAdded benchmark scripts (#475) (#476) (#490)\nwrite_cuts_to_file now saves visited states (#468)","category":"page"},{"location":"release_notes/#Fixed-16","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed BoundStalling in a deterministic policy (#470) (#474)\nFixed magnitude warning with zero coefficients (#483)","category":"page"},{"location":"release_notes/#Other-26","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Improvements to LagrangianDuality (#481) (#482) (#487)\nImprovements to StrengthenedConicDuality (#486)\nSwitch to functional form for the tests (#478)\nFixed typos (#472) (Thanks @vfdev-5)\nUpdate to JuMP v0.22 (#498)","category":"page"},{"location":"release_notes/#[v0.4.3](https://github.com/odow/SDDP.jl/releases/tag/v0.4.3)-(August-31,-2021)","page":"Release notes","title":"v0.4.3 (August 31, 
2021)","text":"","category":"section"},{"location":"release_notes/#Added-15","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added biobjective solver (#462)\nAdded forward_pass_callback (#466)","category":"page"},{"location":"release_notes/#Other-27","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Update tutorials and documentation (#459) (#465)\nOrganize how paper materials are stored (#464)","category":"page"},{"location":"release_notes/#[v0.4.2](https://github.com/odow/SDDP.jl/releases/tag/v0.4.2)-(August-24,-2021)","page":"Release notes","title":"v0.4.2 (August 24, 2021)","text":"","category":"section"},{"location":"release_notes/#Fixed-17","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed a bug in Lagrangian duality (#457)","category":"page"},{"location":"release_notes/#[v0.4.1](https://github.com/odow/SDDP.jl/releases/tag/v0.4.1)-(August-23,-2021)","page":"Release notes","title":"v0.4.1 (August 23, 2021)","text":"","category":"section"},{"location":"release_notes/#Other-28","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Minor changes to our implementation of LagrangianDuality (#454) (#455)","category":"page"},{"location":"release_notes/#[v0.4.0](https://github.com/odow/SDDP.jl/releases/tag/v0.4.0)-(August-17,-2021)","page":"Release notes","title":"v0.4.0 (August 17, 2021)","text":"","category":"section"},{"location":"release_notes/#Breaking","page":"Release notes","title":"Breaking","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"A large refactoring for how we handle stochastic integer programs. This added support for things like SDDP.ContinuousConicDuality and SDDP.LagrangianDuality. It was breaking because we removed the integrality_handler argument to PolicyGraph. 
(#449) (#453)","category":"page"},{"location":"release_notes/#Other-29","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#447) (#448) (#450)","category":"page"},{"location":"release_notes/#[v0.3.17](https://github.com/odow/SDDP.jl/releases/tag/v0.3.17)-(July-6,-2021)","page":"Release notes","title":"v0.3.17 (July 6, 2021)","text":"","category":"section"},{"location":"release_notes/#Added-16","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added SDDP.PSRSamplingScheme (#426)","category":"page"},{"location":"release_notes/#Other-30","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Display more model attributes (#438)\nDocumentation improvements (#433) (#437) (#439)","category":"page"},{"location":"release_notes/#[v0.3.16](https://github.com/odow/SDDP.jl/releases/tag/v0.3.16)-(June-17,-2021)","page":"Release notes","title":"v0.3.16 (June 17, 2021)","text":"","category":"section"},{"location":"release_notes/#Added-17","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added SDDP.RiskAdjustedForwardPass (#413)\nAllow SDDP.Historical to sample sequentially (#420)","category":"page"},{"location":"release_notes/#Other-31","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Update risk measure docstrings (#418)","category":"page"},{"location":"release_notes/#[v0.3.15](https://github.com/odow/SDDP.jl/releases/tag/v0.3.15)-(June-1,-2021)","page":"Release notes","title":"v0.3.15 (June 1, 2021)","text":"","category":"section"},{"location":"release_notes/#Added-18","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added SDDP.StoppingChain","category":"page"},{"location":"release_notes/#Fixed-18","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed scoping bug in SDDP.@stageobjective (#407)\nFixed a bug when the initial point is infeasible (#411)\nSet subproblems to silent by default (#409)","category":"page"},{"location":"release_notes/#Other-32","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Add JuliaFormatter (#412)\nDocumentation improvements (#406) (#408)","category":"page"},{"location":"release_notes/#[v0.3.14](https://github.com/odow/SDDP.jl/releases/tag/v0.3.14)-(March-30,-2021)","page":"Release notes","title":"v0.3.14 (March 30, 2021)","text":"","category":"section"},{"location":"release_notes/#Fixed-19","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed O(N^2) behavior in get_same_children (#393)","category":"page"},{"location":"release_notes/#[v0.3.13](https://github.com/odow/SDDP.jl/releases/tag/v0.3.13)-(March-27,-2021)","page":"Release notes","title":"v0.3.13 (March 27, 
2021)","text":"","category":"section"},{"location":"release_notes/#Fixed-20","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed bug in print.jl\nFixed compat of Reexport (#388)","category":"page"},{"location":"release_notes/#[v0.3.12](https://github.com/odow/SDDP.jl/releases/tag/v0.3.12)-(March-22,-2021)","page":"Release notes","title":"v0.3.12 (March 22, 2021)","text":"","category":"section"},{"location":"release_notes/#Added-19","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added problem statistics to header (#385) (#386)","category":"page"},{"location":"release_notes/#Fixed-21","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed subtypes in visualization (#384)","category":"page"},{"location":"release_notes/#[v0.3.11](https://github.com/odow/SDDP.jl/releases/tag/v0.3.11)-(March-22,-2021)","page":"Release notes","title":"v0.3.11 (March 22, 2021)","text":"","category":"section"},{"location":"release_notes/#Fixed-22","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed constructor in direct mode (#383)","category":"page"},{"location":"release_notes/#Other-33","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fix documentation (#379)","category":"page"},{"location":"release_notes/#[v0.3.10](https://github.com/odow/SDDP.jl/releases/tag/v0.3.10)-(February-23,-2021)","page":"Release notes","title":"v0.3.10 (February 23, 2021)","text":"","category":"section"},{"location":"release_notes/#Fixed-23","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed seriescolor in publication plot (#376)","category":"page"},{"location":"release_notes/#[v0.3.9](https://github.com/odow/SDDP.jl/releases/tag/v0.3.9)-(February-20,-2021)","page":"Release notes","title":"v0.3.9 (February 20, 2021)","text":"","category":"section"},{"location":"release_notes/#Added-20","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Add option to simulate with different incoming state (#372)\nAdded warning for cuts with high dynamic range (#373)","category":"page"},{"location":"release_notes/#Fixed-24","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed seriesalpha in publication plot (#375)","category":"page"},{"location":"release_notes/#[v0.3.8](https://github.com/odow/SDDP.jl/releases/tag/v0.3.8)-(January-19,-2021)","page":"Release notes","title":"v0.3.8 (January 19, 2021)","text":"","category":"section"},{"location":"release_notes/#Other-34","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#367) (#369) (#370)","category":"page"},{"location":"release_notes/#[v0.3.7](https://github.com/odow/SDDP.jl/releases/tag/v0.3.7)-(January-8,-2021)","page":"Release 
notes","title":"v0.3.7 (January 8, 2021)","text":"","category":"section"},{"location":"release_notes/#Other-35","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#362) (#363) (#365) (#366)\nBump copyright (#364)","category":"page"},{"location":"release_notes/#[v0.3.6](https://github.com/odow/SDDP.jl/releases/tag/v0.3.6)-(December-17,-2020)","page":"Release notes","title":"v0.3.6 (December 17, 2020)","text":"","category":"section"},{"location":"release_notes/#Other-36","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fix typos (#358)\nCollapse navigation bar in docs (#359)\nUpdate TagBot.yml (#361)","category":"page"},{"location":"release_notes/#[v0.3.5](https://github.com/odow/SDDP.jl/releases/tag/v0.3.5)-(November-18,-2020)","page":"Release notes","title":"v0.3.5 (November 18, 2020)","text":"","category":"section"},{"location":"release_notes/#Other-37","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Update citations (#348)\nSwitch to GitHub actions (#355)","category":"page"},{"location":"release_notes/#[v0.3.4](https://github.com/odow/SDDP.jl/releases/tag/v0.3.4)-(August-25,-2020)","page":"Release notes","title":"v0.3.4 (August 25, 2020)","text":"","category":"section"},{"location":"release_notes/#Added-21","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added non-uniform distributionally robust risk measure (#328)\nAdded numerical recovery functions (#330)\nAdded experimental StochOptFormat (#332) (#336) (#337) (#341) (#343) (#344)\nAdded entropic risk measure (#347)","category":"page"},{"location":"release_notes/#Other-38","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Documentation improvements (#327) (#333) (#339) (#340)","category":"page"},{"location":"release_notes/#[v0.3.3](https://github.com/odow/SDDP.jl/releases/tag/v0.3.3)-(June-19,-2020)","page":"Release notes","title":"v0.3.3 (June 19, 2020)","text":"","category":"section"},{"location":"release_notes/#Added-22","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added asynchronous support for price and belief states (#325)\nAdded ForwardPass plug-in system (#320)","category":"page"},{"location":"release_notes/#Fixed-25","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fix check for probabilities in Markovian graph (#322)","category":"page"},{"location":"release_notes/#[v0.3.2](https://github.com/odow/SDDP.jl/releases/tag/v0.3.2)-(April-6,-2020)","page":"Release notes","title":"v0.3.2 (April 6, 2020)","text":"","category":"section"},{"location":"release_notes/#Added-23","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added log_frequency argument to SDDP.train (#307)","category":"page"},{"location":"release_notes/#Other-39","page":"Release 
notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Improve error message in deterministic equivalent (#312)\nUpdate to RecipesBase 1.0 (#313)","category":"page"},{"location":"release_notes/#[v0.3.1](https://github.com/odow/SDDP.jl/releases/tag/v0.3.1)-(February-26,-2020)","page":"Release notes","title":"v0.3.1 (February 26, 2020)","text":"","category":"section"},{"location":"release_notes/#Fixed-26","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed filename in integrality_handlers.jl (#304)","category":"page"},{"location":"release_notes/#[v0.3.0](https://github.com/odow/SDDP.jl/releases/tag/v0.3.0)-(February-20,-2020)","page":"Release notes","title":"v0.3.0 (February 20, 2020)","text":"","category":"section"},{"location":"release_notes/#Breaking-2","page":"Release notes","title":"Breaking","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Breaking changes to update to JuMP v0.21 (#300).","category":"page"},{"location":"release_notes/#[v0.2.4](https://github.com/odow/SDDP.jl/releases/tag/v0.2.4)-(February-7,-2020)","page":"Release notes","title":"v0.2.4 (February 7, 2020)","text":"","category":"section"},{"location":"release_notes/#Added-24","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added a counter for the number of total subproblem solves (#301)","category":"page"},{"location":"release_notes/#Other-40","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Update formatter (#298)\nAdded tests (#299)","category":"page"},{"location":"release_notes/#[v0.2.3](https://github.com/odow/SDDP.jl/releases/tag/v0.2.3)-(January-24,-2020)","page":"Release notes","title":"v0.2.3 (January 24, 2020)","text":"","category":"section"},{"location":"release_notes/#Added-25","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added support for convex risk measures (#294)","category":"page"},{"location":"release_notes/#Fixed-27","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed bug when subproblem is infeasible (#296)\nFixed bug in deterministic equivalent (#297)","category":"page"},{"location":"release_notes/#Other-41","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added example from IJOC paper (#293)","category":"page"},{"location":"release_notes/#[v0.2.2](https://github.com/odow/SDDP.jl/releases/tag/v0.2.2)-(January-10,-2020)","page":"Release notes","title":"v0.2.2 (January 10, 2020)","text":"","category":"section"},{"location":"release_notes/#Fixed-28","page":"Release notes","title":"Fixed","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Fixed flakey time limit in tests (#291)","category":"page"},{"location":"release_notes/#Other-42","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release 
notes","title":"Release notes","text":"Removed MathOptFormat.jl (#289)\nUpdate copyright (#290)","category":"page"},{"location":"release_notes/#[v0.2.1](https://github.com/odow/SDDP.jl/releases/tag/v0.2.1)-(December-19,-2019)","page":"Release notes","title":"v0.2.1 (December 19, 2019)","text":"","category":"section"},{"location":"release_notes/#Added-26","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added support for approximating a Markov lattice (#282) (#285)\nAdd tools for visualizing the value function (#272) (#286)\nWrite .mof.json files on error (#284)","category":"page"},{"location":"release_notes/#Other-43","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Improve documentation (#281) (#283)\nUpdate tests for Julia 1.3 (#287)","category":"page"},{"location":"release_notes/#[v0.2.0](https://github.com/odow/SDDP.jl/releases/tag/v0.2.0)-(December-16,-2019)","page":"Release notes","title":"v0.2.0 (December 16, 2019)","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"This version added the asynchronous parallel implementation with a few minor breaking changes in how we iterated internally. It didn't break basic user-facing models, only implementations that implemented some of the extension features. It probably could have been a v1.1 release.","category":"page"},{"location":"release_notes/#Added-27","page":"Release notes","title":"Added","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Added asynchronous parallel implementation (#277)\nAdded roll-out algorithm for cyclic graphs (#279)","category":"page"},{"location":"release_notes/#Other-44","page":"Release notes","title":"Other","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Improved error messages in PolicyGraph (#271)\nAdded JuliaFormatter (#273) (#276)\nFixed compat bounds (#274) (#278)\nAdded documentation for simulating non-standard graphs (#280)","category":"page"},{"location":"release_notes/#[v0.1.0](https://github.com/odow/SDDP.jl/releases/tag/v0.1.0)-(October-17,-2019)","page":"Release notes","title":"v0.1.0 (October 17, 2019)","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"A complete rewrite of SDDP.jl based on the policy graph framework. This was essentially a new package. It has minimal code in common with the previous implementation.","category":"page"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Development started on September 28, 2018 in Kokako.jl, and the code was merged into SDDP.jl on March 14, 2019.","category":"page"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"The pull request SDDP.jl#180 lists the 29 issues that the rewrite closed.","category":"page"},{"location":"release_notes/#[v0.0.1](https://github.com/odow/SDDP.jl/releases/tag/v0.0.1)-(April-18,-2018)","page":"Release notes","title":"v0.0.1 (April 18, 2018)","text":"","category":"section"},{"location":"release_notes/","page":"Release notes","title":"Release notes","text":"Initial release. Development had been underway since January 22, 2016 in the StochDualDynamicProgram.jl repository. 
The last development commit there was April 5, 2017. Work then continued in this repository for a year before the first tagged release.","category":"page"},{"location":"examples/asset_management_stagewise/","page":"Asset management with modifications","title":"Asset management with modifications","text":"EditURL = \"asset_management_stagewise.jl\"","category":"page"},{"location":"examples/asset_management_stagewise/#Asset-management-with-modifications","page":"Asset management with modifications","title":"Asset management with modifications","text":"","category":"section"},{"location":"examples/asset_management_stagewise/","page":"Asset management with modifications","title":"Asset management with modifications","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/asset_management_stagewise/","page":"Asset management with modifications","title":"Asset management with modifications","text":"A modified version of the Asset Management Problem Taken from the book J.R. Birge, F. Louveaux, Introduction to Stochastic Programming, Springer Series in Operations Research and Financial Engineering, Springer New York, New York, NY, 2011","category":"page"},{"location":"examples/asset_management_stagewise/","page":"Asset management with modifications","title":"Asset management with modifications","text":"using SDDP, HiGHS, Test\n\nfunction asset_management_stagewise(; cut_type)\n w_s = [1.25, 1.06]\n w_b = [1.14, 1.12]\n Phi = [-1, 5]\n Psi = [0.02, 0.0]\n\n model = SDDP.MarkovianPolicyGraph(;\n sense = :Max,\n transition_matrices = Array{Float64,2}[\n [1.0]',\n [0.5 0.5],\n [0.5 0.5; 0.5 0.5],\n [0.5 0.5; 0.5 0.5],\n ],\n upper_bound = 1000.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, node\n t, i = node\n @variable(subproblem, xs >= 0, SDDP.State, initial_value = 0)\n @variable(subproblem, xb >= 0, SDDP.State, initial_value = 0)\n if t == 1\n @constraint(subproblem, xs.out + xb.out == 55 + xs.in + xb.in)\n @stageobjective(subproblem, 0)\n elseif t == 2 || t == 3\n @variable(subproblem, phi)\n @constraint(\n subproblem,\n w_s[i] * xs.in + w_b[i] * xb.in + phi == xs.out + xb.out\n )\n SDDP.parameterize(subproblem, [1, 2], [0.6, 0.4]) do ω\n JuMP.fix(phi, Phi[ω])\n @stageobjective(subproblem, Psi[ω] * xs.out)\n end\n else\n @variable(subproblem, u >= 0)\n @variable(subproblem, v >= 0)\n @constraint(\n subproblem,\n w_s[i] * xs.in + w_b[i] * xb.in + u - v == 80,\n )\n @stageobjective(subproblem, -4u + v)\n end\n end\n SDDP.train(\n model;\n cut_type = cut_type,\n log_frequency = 10,\n risk_measure = (node) -> begin\n if node[1] != 3\n SDDP.Expectation()\n else\n SDDP.EAVaR(; lambda = 0.5, beta = 0.5)\n end\n end,\n )\n @test SDDP.calculate_bound(model) ≈ 1.278 atol = 1e-3\n return\nend\n\nasset_management_stagewise(; cut_type = SDDP.SINGLE_CUT)\n\nasset_management_stagewise(; cut_type = SDDP.MULTI_CUT)","category":"page"},{"location":"guides/choose_a_stopping_rule/#Choose-a-stopping-rule","page":"Choose a stopping rule","title":"Choose a stopping rule","text":"","category":"section"},{"location":"guides/choose_a_stopping_rule/","page":"Choose a stopping rule","title":"Choose a stopping rule","text":"The theory of SDDP tells us that the algorithm converges to an optimal policy almost surely in a finite number of iterations. In practice, this number is very large. 
Therefore, we need some way of pre-emptively terminating SDDP when the solution is “good enough.” We call heuristics for pre-emptively terminating SDDP stopping rules.","category":"page"},{"location":"guides/choose_a_stopping_rule/#Basic-limits","page":"Choose a stopping rule","title":"Basic limits","text":"","category":"section"},{"location":"guides/choose_a_stopping_rule/","page":"Choose a stopping rule","title":"Choose a stopping rule","text":"The training of an SDDP policy can be terminated after a fixed number of iterations using the iteration_limit keyword.","category":"page"},{"location":"guides/choose_a_stopping_rule/","page":"Choose a stopping rule","title":"Choose a stopping rule","text":"SDDP.train(model; iteration_limit = 10)","category":"page"},{"location":"guides/choose_a_stopping_rule/","page":"Choose a stopping rule","title":"Choose a stopping rule","text":"The training of an SDDP policy can be terminated after a fixed number of seconds using the time_limit keyword.","category":"page"},{"location":"guides/choose_a_stopping_rule/","page":"Choose a stopping rule","title":"Choose a stopping rule","text":"SDDP.train(model; time_limit = 2.0)","category":"page"},{"location":"guides/choose_a_stopping_rule/#Stopping-rules","page":"Choose a stopping rule","title":"Stopping rules","text":"","category":"section"},{"location":"guides/choose_a_stopping_rule/","page":"Choose a stopping rule","title":"Choose a stopping rule","text":"In addition to the limits provided as keyword arguments, a variety of other stopping rules are available. These can be passed to SDDP.train as a vector to the stopping_rules keyword. Training stops if any of the rules becomes active. To stop when all of the rules become active, use SDDP.StoppingChain. For example:","category":"page"},{"location":"guides/choose_a_stopping_rule/","page":"Choose a stopping rule","title":"Choose a stopping rule","text":"# Terminate if BoundStalling becomes true\nSDDP.train(\n model;\n stopping_rules = [SDDP.BoundStalling(10, 1e-4)],\n)\n\n# Terminate if BoundStalling OR TimeLimit becomes true\nSDDP.train(\n model; \n stopping_rules = [SDDP.BoundStalling(10, 1e-4), SDDP.TimeLimit(100.0)],\n)\n\n# Terminate if BoundStalling AND TimeLimit becomes true\nSDDP.train(\n model; \n stopping_rules = [\n SDDP.StoppingChain(SDDP.BoundStalling(10, 1e-4), SDDP.TimeLimit(100.0)),\n ],\n)","category":"page"},{"location":"guides/choose_a_stopping_rule/","page":"Choose a stopping rule","title":"Choose a stopping rule","text":"See Stopping rules for a list of stopping rules supported by SDDP.jl.","category":"page"},{"location":"examples/belief/","page":"Partially observable inventory management","title":"Partially observable inventory management","text":"EditURL = \"belief.jl\"","category":"page"},{"location":"examples/belief/#Partially-observable-inventory-management","page":"Partially observable inventory management","title":"Partially observable inventory management","text":"","category":"section"},{"location":"examples/belief/","page":"Partially observable inventory management","title":"Partially observable inventory management","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/belief/","page":"Partially observable inventory management","title":"Partially observable inventory management","text":"using SDDP, HiGHS, Random, Statistics, Test\n\nfunction inventory_management_problem()\n demand_values = [1.0, 2.0]\n demand_prob = Dict(:Ah => [0.2, 0.8], :Bh => [0.8, 0.2])\n graph = SDDP.Graph(\n :root_node,\n [:Ad, :Ah, :Bd, :Bh],\n [\n (:root_node => :Ad, 0.5),\n (:root_node => :Bd, 0.5),\n (:Ad => :Ah, 1.0),\n (:Ah => :Ad, 0.8),\n (:Ah => :Bd, 0.1),\n (:Bd => :Bh, 1.0),\n (:Bh => :Bd, 0.8),\n (:Bh => :Ad, 0.1),\n ],\n )\n SDDP.add_ambiguity_set(graph, [:Ad, :Bd], 1e2)\n SDDP.add_ambiguity_set(graph, [:Ah, :Bh], 1e2)\n\n model = SDDP.PolicyGraph(\n graph;\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, node\n @variables(\n subproblem,\n begin\n 0 <= inventory <= 2, (SDDP.State, initial_value = 0.0)\n buy >= 0\n demand\n end\n )\n @constraint(subproblem, demand == inventory.in - inventory.out + buy)\n if node == :Ad || node == :Bd || node == :D\n JuMP.fix(demand, 0)\n @stageobjective(subproblem, buy)\n else\n SDDP.parameterize(subproblem, demand_values, demand_prob[node]) do ω\n return JuMP.fix(demand, ω)\n end\n @stageobjective(subproblem, 2 * buy + inventory.out)\n end\n end\n # Train the policy.\n Random.seed!(123)\n SDDP.train(\n model;\n iteration_limit = 100,\n cut_type = SDDP.SINGLE_CUT,\n log_frequency = 10,\n parallel_scheme = SDDP.Serial(),\n )\n results = SDDP.simulate(model, 500; parallel_scheme = SDDP.Serial())\n objectives =\n [sum(s[:stage_objective] for s in simulation) for simulation in results]\n sample_mean = round(Statistics.mean(objectives); digits = 2)\n sample_ci = round(1.96 * Statistics.std(objectives) / sqrt(500); digits = 2)\n @test SDDP.calculate_bound(model) ≈ sample_mean atol = sample_ci\n return\nend\n\ninventory_management_problem()","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"EditURL = \"decision_hazard.jl\"","category":"page"},{"location":"tutorial/decision_hazard/#Here-and-now-and-hazard-decision","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"","category":"section"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"SDDP.jl assumes that the agent gets to make a decision after observing the realization of the random variable. This is called a wait-and-see or hazard-decision model. In contrast, you might want your agent to make decisions before observing the random variable. This is called a here-and-now or decision-hazard model.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"info: Info\nThe terms decision-hazard and hazard-decision from the French hasard, meaning chance. It could also have been translated as uncertainty-decision and decision-uncertainty, but the community seems to have settled on the transliteration hazard instead. 
We like the hazard-decision and decision-hazard terms because they clearly communicate the order of the decision and the uncertainty.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"The purpose of this tutorial is to demonstrate how to model here-and-now decisions in SDDP.jl.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"This tutorial uses the following packages:","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"using SDDP\nimport HiGHS","category":"page"},{"location":"tutorial/decision_hazard/#Hazard-decision-formulation","page":"Here-and-now and hazard-decision","title":"Hazard-decision formulation","text":"","category":"section"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"As an example, we're going to build a standard hydro-thermal scheduling model, with a single hydro-reservoir and a single thermal generation plant. In each of the four stages, we need to choose some mix of u_thermal and u_hydro to meet a demand of 9 units, where unmet demand is penalized at a rate of $500/unit.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"hazard_decision = SDDP.LinearPolicyGraph(;\n stages = 4,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, node\n @variables(sp, begin\n 0 <= x_storage <= 8, (SDDP.State, initial_value = 6)\n u_thermal >= 0\n u_hydro >= 0\n u_unmet_demand >= 0\n end)\n @constraint(sp, u_thermal + u_hydro == 9 - u_unmet_demand)\n @constraint(sp, c_balance, x_storage.out == x_storage.in - u_hydro + 0)\n SDDP.parameterize(sp, [2, 3]) do ω_inflow\n return set_normalized_rhs(c_balance, ω_inflow)\n end\n @stageobjective(sp, 500 * u_unmet_demand + 20 * u_thermal)\nend","category":"page"},{"location":"tutorial/decision_hazard/#Decision-hazard-formulation","page":"Here-and-now and hazard-decision","title":"Decision-hazard formulation","text":"","category":"section"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"In the wait-and-see formulation, we get to decide the generation variables after observing the realization of ω_inflow. However, a common modeling situation is that we need to decide the level of thermal generation u_thermal before observing the inflow.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"SDDP.jl can model here-and-now decisions with a modeling trick: a wait-and-see decision in stage t-1 is equivalent to a here-and-now decision in stage t.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"In other words, we need to convert the u_thermal decision from a control variable that is decided in stage t, to a state variable that is decided in stage t-1. 
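In outline, the pattern looks like this (a sketch only, using the same names as the model below; the complete model follows):\n\n# wait-and-see (hazard-decision): a control variable\n@variable(sp, u_thermal >= 0)\n@stageobjective(sp, 20 * u_thermal)\n\n# here-and-now (decision-hazard): promote the control to a state variable and\n# use its incoming value in this stage\n@variable(sp, u_thermal >= 0, SDDP.State, initial_value = 0)\n@stageobjective(sp, 20 * u_thermal.in)\n\n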
Here's our new model, with the three lines that have changed:","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"decision_hazard = SDDP.LinearPolicyGraph(;\n stages = 4,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, node\n @variables(sp, begin\n 0 <= x_storage <= 8, (SDDP.State, initial_value = 6)\n u_thermal >= 0, (SDDP.State, initial_value = 0) # <-- changed\n u_hydro >= 0\n u_unmet_demand >= 0\n end)\n @constraint(sp, u_thermal.in + u_hydro == 9 - u_unmet_demand) # <-- changed\n @constraint(sp, c_balance, x_storage.out == x_storage.in - u_hydro + 0)\n SDDP.parameterize(sp, [2, 3]) do ω\n return set_normalized_rhs(c_balance, ω)\n end\n @stageobjective(sp, 500 * u_unmet_demand + 20 * u_thermal.in) # <-- changed\nend","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"Can you understand the reformulation? In each stage, we now use the value of u_thermal.in instead of u_thermal, and the value of the outgoing u_thermal.out is the here-and-now decision for stage t+1.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"(If you can spot a \"mistake\" with this model, don't worry, we'll fix it below. Presenting it like this simplifies the exposition.)","category":"page"},{"location":"tutorial/decision_hazard/#Comparison","page":"Here-and-now and hazard-decision","title":"Comparison","text":"","category":"section"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"Let's compare the cost of operating the two models:","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"function train_and_compute_cost(model)\n SDDP.train(model; print_level = 0)\n return println(\"Cost = \\$\", SDDP.calculate_bound(model))\nend\n\ntrain_and_compute_cost(hazard_decision)","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"train_and_compute_cost(decision_hazard)","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"This suggests that choosing the thermal generation before observing the inflow adds a cost of $250. But does this make sense?","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"If we look carefully at our decision_hazard model, the incoming value of u_thermal.in in the first stage is fixed to the initial_value of 0. 
Therefore, we must always meet the full demand with u_hydro, which we cannot do without incurring unmet demand.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"To allow the model to choose an optimal level of u_thermal in the first-stage, we need to add an extra stage that is deterministic with no stage objective.","category":"page"},{"location":"tutorial/decision_hazard/#Fixing-the-decision-hazard","page":"Here-and-now and hazard-decision","title":"Fixing the decision-hazard","text":"","category":"section"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"In the following model, we now have five stages, so that stage t+1 in decision_hazard_2 corresponds to stage t in decision_hazard. We've also added an if-statement, which adds different constraints depending on the node. Note that we need to add an x_storage.out == x_storage.in constraint because the storage can't change in this new first-stage.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"decision_hazard_2 = SDDP.LinearPolicyGraph(;\n stages = 5, # <-- changed\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, node\n @variables(sp, begin\n 0 <= x_storage <= 8, (SDDP.State, initial_value = 6)\n u_thermal >= 0, (SDDP.State, initial_value = 0)\n u_hydro >= 0\n u_unmet_demand >= 0\n end)\n if node == 1 # <-- new\n @constraint(sp, x_storage.out == x_storage.in) # <-- new\n @stageobjective(sp, 0) # <-- new\n else\n @constraint(sp, u_thermal.in + u_hydro == 9 - u_unmet_demand)\n @constraint(sp, c_balance, x_storage.out == x_storage.in - u_hydro + 0)\n SDDP.parameterize(sp, [2, 3]) do ω\n return set_normalized_rhs(c_balance, ω)\n end\n @stageobjective(sp, 500 * u_unmet_demand + 20 * u_thermal.in)\n end\nend\n\ntrain_and_compute_cost(decision_hazard_2)","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"Now we find that the cost of choosing the thermal generation before observing the inflow adds a much more reasonable cost of $10.","category":"page"},{"location":"tutorial/decision_hazard/#Summary","page":"Here-and-now and hazard-decision","title":"Summary","text":"","category":"section"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"To summarize, the difference between here-and-now and wait-and-see variables is a modeling choice.","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"To create a here-and-now decision, add it as a state variable to the previous stage","category":"page"},{"location":"tutorial/decision_hazard/","page":"Here-and-now and hazard-decision","title":"Here-and-now and hazard-decision","text":"In some cases, you'll need to add an additional \"first-stage\" problem to enable the model to choose an optimal value for the here-and-now decision variable. You do not need to do this if the first stage is deterministic. 
You must make sure that the subproblem is feasible for all possible incoming values of the here-and-now decision variable.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"EditURL = \"pglib_opf.jl\"","category":"page"},{"location":"tutorial/pglib_opf/#Alternative-forward-models","page":"Alternative forward models","title":"Alternative forward models","text":"","category":"section"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"This example demonstrates how to train convex and non-convex models.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"This example uses the following packages:","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"using SDDP\nimport Ipopt\nimport PowerModels\nimport Test","category":"page"},{"location":"tutorial/pglib_opf/#Formulation","page":"Alternative forward models","title":"Formulation","text":"","category":"section"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"For our model, we build a simple optimal power flow model with a single hydro-electric generator.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"The formulation of our optimal power flow problem depends on model_type, which must be one of the PowerModels formulations.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"(To run locally, download pglib_opf_case5_pjm.m and update filename appropriately.)","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"function build_model(model_type)\n filename = joinpath(@__DIR__, \"pglib_opf_case5_pjm.m\")\n data = PowerModels.parse_file(filename)\n return SDDP.PolicyGraph(\n SDDP.UnicyclicGraph(0.95);\n sense = :Min,\n lower_bound = 0.0,\n optimizer = Ipopt.Optimizer,\n ) do sp, t\n power_model = PowerModels.instantiate_model(\n data,\n model_type,\n PowerModels.build_opf;\n jump_model = sp,\n )\n # Now add hydro power models. 
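(Note: PowerModels.instantiate_model has already built the OPF constraints for the chosen model_type inside the JuMP model sp, so we only need to add the reservoir dynamics.) 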
Assume that generator 5 is hydro, and the\n # rest are thermal.\n pg = power_model.var[:it][:pm][:nw][0][:pg][5]\n sp[:pg] = pg\n @variable(sp, x >= 0, SDDP.State, initial_value = 10.0)\n @variable(sp, deficit >= 0)\n @constraint(sp, balance, x.out == x.in - pg + deficit)\n @stageobjective(sp, objective_function(sp) + 1e6 * deficit)\n SDDP.parameterize(sp, [0, 2, 5]) do ω\n return SDDP.set_normalized_rhs(balance, ω)\n end\n return\n end\nend","category":"page"},{"location":"tutorial/pglib_opf/#Training-a-convex-model","page":"Alternative forward models","title":"Training a convex model","text":"","category":"section"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"We can build and train a convex approximation of the optimal power flow problem.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"The problem with the convex model is that it does not accurately simulate the true dynamics of the problem. Therefore, it under-estimates the true cost of operation.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"convex = build_model(PowerModels.DCPPowerModel)\nSDDP.train(convex; iteration_limit = 10)","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"To more accurately simulate the dynamics of the problem, a common approach is to write the cuts representing the policy to a file, and then read them into a non-convex model:","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"SDDP.write_cuts_to_file(convex, \"convex.cuts.json\")\nnon_convex = build_model(PowerModels.ACPPowerModel)\nSDDP.read_cuts_from_file(non_convex, \"convex.cuts.json\")","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"Now we can simulate non_convex to evaluate the policy.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"result = SDDP.simulate(non_convex, 1)","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"A problem with reading and writing the cuts to file is that the cuts have been generated from trial points of the convex model. Therefore, the policy may be arbitrarily bad at points visited by the non-convex model.","category":"page"},{"location":"tutorial/pglib_opf/#Training-a-non-convex-model","page":"Alternative forward models","title":"Training a non-convex model","text":"","category":"section"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"We can also build and train a non-convex formulation of the optimal power flow problem.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"The problem with the non-convex model is that because it is non-convex, SDDP.jl may find a sub-optimal policy. 
Therefore, it may over-estimate the true cost of operation.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"non_convex = build_model(PowerModels.ACPPowerModel)\nSDDP.train(non_convex; iteration_limit = 10)\nresult = SDDP.simulate(non_convex, 1)","category":"page"},{"location":"tutorial/pglib_opf/#Combining-convex-and-non-convex-models","page":"Alternative forward models","title":"Combining convex and non-convex models","text":"","category":"section"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"To summarize, training with the convex model constructs cuts at points that may never be visited by the non-convex model, and training with the non-convex model may construct arbitrarily poor cuts because a key assumption of SDDP is convexity.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"As a compromise, we can train a policy using a combination of the convex and non-convex models; we'll use the non-convex model to generate trial points on the forward pass, and we'll use the convex model to build cuts on the backward pass.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"convex = build_model(PowerModels.DCPPowerModel)","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"non_convex = build_model(PowerModels.ACPPowerModel)","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"To do so, we train convex using the SDDP.AlternativeForwardPass forward pass, which simulates the model using non_convex, and we use SDDP.AlternativePostIterationCallback as a post-iteration callback, which copies cuts from the convex model back into the non_convex model.","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"SDDP.train(\n convex;\n forward_pass = SDDP.AlternativeForwardPass(non_convex),\n post_iteration_callback = SDDP.AlternativePostIterationCallback(non_convex),\n iteration_limit = 10,\n)","category":"page"},{"location":"tutorial/pglib_opf/","page":"Alternative forward models","title":"Alternative forward models","text":"In practice, if we were to simulate non_convex now, we should obtain a better policy than either of the two previous approaches.","category":"page"},{"location":"","page":"Home","title":"Home","text":"CurrentModule = SDDP","category":"page"},{"location":"","page":"Home","title":"Home","text":"\"logo\"","category":"page"},{"location":"#Introduction","page":"Home","title":"Introduction","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"(Image: Build Status) (Image: code coverage)","category":"page"},{"location":"","page":"Home","title":"Home","text":"Welcome to SDDP.jl, a package for solving large convex multistage stochastic programming problems using stochastic dual dynamic programming.","category":"page"},{"location":"","page":"Home","title":"Home","text":"SDDP.jl is built on JuMP, so it supports a number of open-source and commercial solvers, making it a powerful and flexible tool for stochastic optimization.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The 
implementation of the stochastic dual dynamic programming algorithm in SDDP.jl is state of the art, and it includes support for a number of advanced features not commonly found in other implementations. This includes support for:","category":"page"},{"location":"","page":"Home","title":"Home","text":"infinite horizon problems\nconvex risk measures\nmixed-integer state and control variables\npartially observable stochastic processes.","category":"page"},{"location":"#Installation","page":"Home","title":"Installation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Install SDDP.jl as follows:","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia> import Pkg\n\njulia> Pkg.add(\"SDDP\")","category":"page"},{"location":"#License","page":"Home","title":"License","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"SDDP.jl is licensed under the MPL 2.0 license.","category":"page"},{"location":"#Resources-for-getting-started","page":"Home","title":"Resources for getting started","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"There are a few ways to get started with SDDP.jl:","category":"page"},{"location":"","page":"Home","title":"Home","text":"Become familiar with JuMP by reading the JuMP documentation\nRead the introductory tutorial An introduction to SDDP.jl\nBrowse some of the examples, such as Example: deterministic to stochastic","category":"page"},{"location":"#Getting-help","page":"Home","title":"Getting help","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"If you need help, please open a GitHub issue.","category":"page"},{"location":"#How-the-documentation-is-structured","page":"Home","title":"How the documentation is structured","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Having a high-level overview of how this documentation is structured will help you know where to look for certain things.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Tutorials contains step-by-step explanations of how to use SDDP.jl. Once you've got SDDP.jl installed, start by reading An introduction to SDDP.jl.\nGuides contains \"how-to\" snippets that demonstrate specific topics within SDDP.jl. A good one to get started on is Debug a model.\nExplanation contains step-by-step explanations of the theory and algorithms that underpin SDDP.jl. If you want a basic understanding of the algorithm behind SDDP.jl, start with Introductory theory.\nExamples contain worked examples of various problems solved using SDDP.jl. A good one to get started on is the Hydro-thermal scheduling problem. In particular, it shows how to solve an infinite horizon problem.\nThe API Reference contains a complete list of the functions you can use in SDDP.jl. Look here if you want to know how to use a particular function.","category":"page"},{"location":"#Citing-SDDP.jl","page":"Home","title":"Citing SDDP.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"If you use SDDP.jl, we ask that you please cite the following:","category":"page"},{"location":"","page":"Home","title":"Home","text":"@article{dowson_sddp.jl,\n\ttitle = {{SDDP}.jl: a {Julia} package for stochastic dual dynamic programming},\n\tjournal = {INFORMS Journal on Computing},\n\tauthor = {Dowson, O. 
and Kapelevich, L.},\n\tdoi = {https://doi.org/10.1287/ijoc.2020.0987},\n\tyear = {2021},\n\tvolume = {33},\n\tissue = {1},\n\tpages = {27-33},\n}","category":"page"},{"location":"","page":"Home","title":"Home","text":"Here is an earlier preprint.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you use the infinite horizon functionality, we ask that you please cite the following:","category":"page"},{"location":"","page":"Home","title":"Home","text":"@article{dowson_policy_graph,\n\ttitle = {The policy graph decomposition of multistage stochastic optimization problems},\n\tdoi = {https://doi.org/10.1002/net.21932},\n\tjournal = {Networks},\n\tauthor = {Dowson, O.},\n\tvolume = {76},\n\tissue = {1},\n\tpages = {3-23},\n\tyear = {2020}\n}","category":"page"},{"location":"","page":"Home","title":"Home","text":"Here is an earlier preprint.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you use the partially observable functionality, we ask that you please cite the following:","category":"page"},{"location":"","page":"Home","title":"Home","text":"@article{dowson_pomsp,\n\ttitle = {Partially observable multistage stochastic programming},\n\tdoi = {https://doi.org/10.1016/j.orl.2020.06.005},\n\tjournal = {Operations Research Letters},\n\tauthor = {Dowson, O. and Morton, D.P. and Pagnoncelli, B.K.},\n\tvolume = {48},\n\tissue = {4},\n\tpages = {505-512},\n\tyear = {2020}\n}","category":"page"},{"location":"","page":"Home","title":"Home","text":"Here is an earlier preprint.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you use the objective state functionality, we ask that you please cite the following:","category":"page"},{"location":"","page":"Home","title":"Home","text":"@article{downward_objective,\n\ttitle = {Stochastic dual dynamic programming with stagewise-dependent objective uncertainty},\n\tdoi = {https://doi.org/10.1016/j.orl.2019.11.002},\n\tjournal = {Operations Research Letters},\n\tauthor = {Downward, A. and Dowson, O. and Baucke, R.},\n\tvolume = {48},\n\tissue = {1},\n\tpages = {33-39},\n\tyear = {2020}\n}","category":"page"},{"location":"","page":"Home","title":"Home","text":"Here is an earlier preprint.","category":"page"},{"location":"","page":"Home","title":"Home","text":"If you use the entropic risk measure, we ask that you please cite the following:","category":"page"},{"location":"","page":"Home","title":"Home","text":"@article{dowson_entropic,\n\ttitle = {Incorporating convex risk measures into multistage stochastic programming algorithms},\n\tdoi = {https://doi.org/10.1007/s10479-022-04977-w},\n\tjournal = {Annals of Operations Research},\n\tauthor = {Dowson, O. and Morton, D.P. and Pagnoncelli, B.K.},\n\tyear = {2022},\n}","category":"page"},{"location":"","page":"Home","title":"Home","text":"Here is an earlier preprint.","category":"page"},{"location":"examples/all_blacks/","page":"Deterministic All Blacks","title":"Deterministic All Blacks","text":"EditURL = \"all_blacks.jl\"","category":"page"},{"location":"examples/all_blacks/#Deterministic-All-Blacks","page":"Deterministic All Blacks","title":"Deterministic All Blacks","text":"","category":"section"},{"location":"examples/all_blacks/","page":"Deterministic All Blacks","title":"Deterministic All Blacks","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/all_blacks/","page":"Deterministic All Blacks","title":"Deterministic All Blacks","text":"using SDDP, HiGHS, Test\n\nfunction all_blacks()\n # Number of time periods, number of seats, R_ij = revenue from selling seat\n # i at time j, offer_ij = whether an offer for seat i will come at time j\n (T, N, R, offer) = (3, 2, [3 3 6; 3 3 6], [1 1 0; 1 0 1])\n model = SDDP.LinearPolicyGraph(;\n stages = T,\n sense = :Max,\n upper_bound = 100.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, stage\n # Seat remaining?\n @variable(sp, 0 <= x[1:N] <= 1, SDDP.State, Bin, initial_value = 1)\n # Action: accept offer, or don't accept offer\n @variable(sp, accept_offer, Bin)\n # Balance on seats\n @constraint(\n sp,\n [i in 1:N],\n x[i].out == x[i].in - offer[i, stage] * accept_offer\n )\n @stageobjective(\n sp,\n sum(R[i, stage] * offer[i, stage] * accept_offer for i in 1:N)\n )\n end\n SDDP.train(model; duality_handler = SDDP.LagrangianDuality())\n @test SDDP.calculate_bound(model) ≈ 9.0\n return\nend\n\nall_blacks()","category":"page"},{"location":"examples/sldp_example_one/","page":"SLDP: example 1","title":"SLDP: example 1","text":"EditURL = \"sldp_example_one.jl\"","category":"page"},{"location":"examples/sldp_example_one/#SLDP:-example-1","page":"SLDP: example 1","title":"SLDP: example 1","text":"","category":"section"},{"location":"examples/sldp_example_one/","page":"SLDP: example 1","title":"SLDP: example 1","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/sldp_example_one/","page":"SLDP: example 1","title":"SLDP: example 1","text":"This example is derived from Section 4.2 of the paper: Ahmed, S., Cabral, F. G., & da Costa, B. F. P. (2019). Stochastic Lipschitz Dynamic Programming. Optimization Online. 
PDF","category":"page"},{"location":"examples/sldp_example_one/","page":"SLDP: example 1","title":"SLDP: example 1","text":"using SDDP, HiGHS, Test\n\nfunction sldp_example_one()\n model = SDDP.LinearPolicyGraph(;\n stages = 8,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, t\n @variable(sp, x, SDDP.State, initial_value = 2.0)\n @variables(sp, begin\n x⁺ >= 0\n x⁻ >= 0\n 0 <= u <= 1, Bin\n ω\n end)\n @stageobjective(sp, 0.9^(t - 1) * (x⁺ + x⁻))\n @constraints(sp, begin\n x.out == x.in + 2 * u - 1 + ω\n x⁺ >= x.out\n x⁻ >= -x.out\n end)\n points = [\n -0.3089653673606697,\n -0.2718277412744214,\n -0.09611178608243474,\n 0.24645863921577763,\n 0.5204224537256875,\n ]\n return SDDP.parameterize(φ -> JuMP.fix(ω, φ), sp, [points; -points])\n end\n SDDP.train(model; log_frequency = 10)\n @test SDDP.calculate_bound(model) <= 1.1675\n return\nend\n\nsldp_example_one()","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/#Simulate-using-a-different-sampling-scheme","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"","category":"section"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"DocTestSetup = quote\n using SDDP, HiGHS\nend","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"By default, SDDP.simulate will simulate the policy using the distributions of noise terms that were defined when the model was created. We call these in-sample simulations. However, in general the in-sample distributions are an approximation of some underlying probability model which we term the true process. 
Therefore, SDDP.jl makes it easy to simulate the policy using different probability distributions.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"To demonstrate the different ways of simulating the policy, we're going to use the model from the tutorial Markovian policy graphs.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"julia> using SDDP, HiGHS\n\njulia> Ω = [\n (inflow = 0.0, fuel_multiplier = 1.5),\n (inflow = 50.0, fuel_multiplier = 1.0),\n (inflow = 100.0, fuel_multiplier = 0.75),\n ]\n3-element Vector{@NamedTuple{inflow::Float64, fuel_multiplier::Float64}}:\n (inflow = 0.0, fuel_multiplier = 1.5)\n (inflow = 50.0, fuel_multiplier = 1.0)\n (inflow = 100.0, fuel_multiplier = 0.75)\n\njulia> model = SDDP.MarkovianPolicyGraph(\n transition_matrices = Array{Float64, 2}[\n [1.0]',\n [0.75 0.25],\n [0.75 0.25; 0.25 0.75],\n ],\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, node\n # Unpack the stage and Markov index.\n t, markov_state = node\n # Define the state variable.\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n # Define the control variables.\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n inflow\n end)\n # Define the constraints\n @constraints(subproblem, begin\n volume.out == volume.in + inflow - hydro_generation - hydro_spill\n thermal_generation + hydro_generation == 150.0\n end)\n # Note how we can use `markov_state` to dispatch an `if` statement.\n probability = if markov_state == 1 # wet climate state\n [1 / 6, 1 / 3, 1 / 2]\n else # dry climate state\n [1 / 2, 1 / 3, 1 / 6]\n end\n fuel_cost = [50.0, 100.0, 150.0]\n SDDP.parameterize(subproblem, Ω, probability) do ω\n JuMP.fix(inflow, ω.inflow)\n @stageobjective(\n subproblem,\n ω.fuel_multiplier * fuel_cost[t] * thermal_generation,\n )\n return\n end\n return\n end\nA policy graph with 5 nodes.\n Node indices: (1, 1), (2, 1), (2, 2), (3, 1), (3, 2)\n\n\njulia> SDDP.train(model; iteration_limit = 10, print_level = 0);","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/#In-sample-Monte-Carlo-simulation","page":"Simulate using a different sampling scheme","title":"In-sample Monte Carlo simulation","text":"","category":"section"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"To simulate the policy using the data defined when model was created, use SDDP.InSampleMonteCarlo.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"julia> simulations = SDDP.simulate(\n model,\n 20;\n sampling_scheme = SDDP.InSampleMonteCarlo(),\n );\n\njulia> sort(unique([data[:noise_term] for sim in simulations for data in sim]))\n3-element Vector{@NamedTuple{inflow::Float64, fuel_multiplier::Float64}}:\n (inflow = 0.0, fuel_multiplier = 1.5)\n (inflow = 50.0, fuel_multiplier = 1.0)\n (inflow = 100.0, fuel_multiplier = 
0.75)","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/#Out-of-sample-Monte-Carlo-simulation","page":"Simulate using a different sampling scheme","title":"Out-of-sample Monte Carlo simulation","text":"","category":"section"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"Instead of using the in-sample data, we can perform an out-of-sample simulation of the policy using the SDDP.OutOfSampleMonteCarlo sampling scheme.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"For each node, the SDDP.OutOfSampleMonteCarlo needs to define a new distribution for the transition probabilities between nodes in the policy graph, and a new distribution for the stagewise independent noise terms.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"note: Note\nThe support of the distribution for the stagewise independent noise terms does not have to be the same as the in-sample distributions.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"julia> sampling_scheme = SDDP.OutOfSampleMonteCarlo(model) do node\n stage, markov_state = node\n if stage == 0\n # Called from the root node. Transition to (1, 1) with probability 1.0.\n # Only return the list of children, _not_ a list of noise terms.\n return [SDDP.Noise((1, 1), 1.0)]\n elseif stage == 3\n # Called from the final node. Return an empty list for the children,\n # and a single, deterministic realization for the noise terms.\n children = SDDP.Noise[]\n noise_terms = [SDDP.Noise((inflow = 75.0, fuel_multiplier = 1.2), 1.0)]\n return children, noise_terms\n else\n # Called from a normal node. Return the in-sample distribution for the\n # noise terms, but modify the transition probabilities so that the\n # Markov switching probability is now 50%.\n probability = markov_state == 1 ? [1/6, 1/3, 1/2] : [1/2, 1/3, 1/6]\n # Note: `Ω` is defined at the top of this page of documentation\n noise_terms = [SDDP.Noise(ω, p) for (ω, p) in zip(Ω, probability)]\n children = [\n SDDP.Noise((stage + 1, 1), 0.5), SDDP.Noise((stage + 1, 2), 0.5)\n ]\n return children, noise_terms\n end\n end;\n\njulia> simulations = SDDP.simulate(model, 1; sampling_scheme = sampling_scheme);\n\njulia> simulations[1][3][:noise_term]\n(inflow = 75.0, fuel_multiplier = 1.2)","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"Alternatively, if you only want to modify the stagewise independent noise terms, pass use_insample_transition = true.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"julia> sampling_scheme = SDDP.OutOfSampleMonteCarlo(\n model;\n use_insample_transition = true\n ) do node\n stage, markov_state = node\n if stage == 3\n # Called from the final node. 
Return a single, deterministic\n # realization for the noise terms. Don't return the children because we\n # use the in-sample data.\n return [SDDP.Noise((inflow = 65.0, fuel_multiplier = 1.1), 1.0)]\n else\n # Called from a normal node. Return the in-sample distribution for the\n # noise terms. Don't return the children because we use the in-sample\n # data.\n probability = markov_state == 1 ? [1/6, 1/3, 1/2] : [1/2, 1/3, 1/6]\n # Note: `Ω` is defined at the top of this page of documentation\n return [SDDP.Noise(ω, p) for (ω, p) in zip(Ω, probability)]\n end\n end;\n\njulia> simulations = SDDP.simulate(model, 1; sampling_scheme = sampling_scheme);\n\njulia> simulations[1][3][:noise_term]\n(inflow = 65.0, fuel_multiplier = 1.1)","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/#Historical-simulation","page":"Simulate using a different sampling scheme","title":"Historical simulation","text":"","category":"section"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"Instead of performing a Monte Carlo simulation like the previous tutorials, we may want to simulate one particular sequence of noise realizations. This historical simulation can also be conducted by passing a SDDP.Historical sampling scheme to the sampling_scheme keyword of the SDDP.simulate function.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"We can confirm that the historical sequence of nodes was visited by querying the :node_index key of the simulation results.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"julia> simulations = SDDP.simulate(\n model;\n sampling_scheme = SDDP.Historical(\n # Note: `Ω` is defined at the top of this page of documentation\n [((1, 1), Ω[1]), ((2, 2), Ω[3]), ((3, 1), Ω[2])],\n ),\n );\n\njulia> [stage[:node_index] for stage in simulations[1]]\n3-element Vector{Tuple{Int64, Int64}}:\n (1, 1)\n (2, 2)\n (3, 1)","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"You can also pass a vector of scenarios, which are sampled sequentially:","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"julia> sampling_scheme = SDDP.Historical(\n [\n [\n ((1,1), (inflow = 65.0, fuel_multiplier = 1.1)),\n ((2,2), (inflow = 10.0, fuel_multiplier = 1.4)), # Can be out-of-sample\n ((3,1), (inflow = 65.0, fuel_multiplier = 1.1)),\n ],\n [\n ((1,1), (inflow = 65.0, fuel_multiplier = 1.1)),\n ((2,2), (inflow = 100.0, fuel_multiplier = 0.75)),\n ((3,1), (inflow = 0.0, fuel_multiplier = 1.5)),\n ],\n ],\n )\nA Historical sampler with 2 scenarios sampled sequentially.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"Or a vector of scenarios and a corresponding vector of probabilities so that the historical scenarios are sampled 
probabilistically:","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"julia> sampling_scheme = SDDP.Historical(\n [\n [\n ((1,1), (inflow = 65.0, fuel_multiplier = 1.1)),\n ((2,2), (inflow = 10.0, fuel_multiplier = 1.4)), # Can be out-of-sample\n ((3,1), (inflow = 65.0, fuel_multiplier = 1.1)),\n ],\n [\n ((1,1), (inflow = 65.0, fuel_multiplier = 1.1)),\n ((2,2), (inflow = 100.0, fuel_multiplier = 0.75)),\n ((3,1), (inflow = 0.0, fuel_multiplier = 1.5)),\n ],\n ],\n [0.3, 0.7],\n )\nA Historical sampler with 2 scenarios sampled probabilistically.","category":"page"},{"location":"guides/simulate_using_a_different_sampling_scheme/","page":"Simulate using a different sampling scheme","title":"Simulate using a different sampling scheme","text":"tip: Tip\nYour sample space doesn't have to be a NamedTuple. It an be any Julia type! Use a Vector if that is easier, or define your own struct.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"EditURL = \"first_steps.jl\"","category":"page"},{"location":"tutorial/first_steps/#An-introduction-to-SDDP.jl","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"SDDP.jl is a solver for multistage stochastic optimization problems. By multistage, we mean problems in which an agent makes a sequence of decisions over time. By stochastic, we mean that the agent is making decisions in the presence of uncertainty that is gradually revealed over the multiple stages.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"tip: Tip\nMultistage stochastic programming has a lot in common with fields like stochastic optimal control, approximate dynamic programming, Markov decision processes, and reinforcement learning. If it helps, you can think of SDDP as Q-learning in which we approximate the value function using linear programming duality.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"This tutorial is in two parts. First, it is an introduction to the background notation and theory we need, and second, it solves a simple multistage stochastic programming problem.","category":"page"},{"location":"tutorial/first_steps/#What-is-a-node?","page":"An introduction to SDDP.jl","title":"What is a node?","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"A common feature of multistage stochastic optimization problems is that they model an agent controlling a system over time. To simplify things initially, we're going to start by describing what happens at an instant in time at which the agent makes a decision. 
Only after this will we extend our problem to multiple stages and the notion of time.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"A node is a place at which the agent makes a decision.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"tip: Tip\nFor readers with a stochastic programming background, \"node\" is synonymous with \"stage\" in this section. However, for reasons that will become clear shortly, there can be more than one \"node\" per instant in time, which is why we prefer the term \"node\" over \"stage.\"","category":"page"},{"location":"tutorial/first_steps/#States,-controls,-and-random-variables","page":"An introduction to SDDP.jl","title":"States, controls, and random variables","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The system that we are modeling can be described by three types of variables.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"State variables track a property of the system over time.\nEach node has an associated incoming state variable (the value of the state at the start of the node), and an outgoing state variable (the value of the state at the end of the node).\nExamples of state variables include the volume of water in a reservoir, the number of units of inventory in a warehouse, or the spatial position of a moving vehicle.\nBecause state variables track the system over time, each node must have the same set of state variables.\nWe denote state variables by the letter x for the incoming state variable and x^prime for the outgoing state variable.\nControl variables are actions taken (implicitly or explicitly) by the agent within a node which modify the state variables.\nExamples of control variables include releases of water from the reservoir, sales or purchasing decisions, and acceleration or braking of the vehicle.\nControl variables are local to a node i, and they can differ between nodes. For example, some control variables may be available within certain nodes.\nWe denote control variables by the letter u.\nRandom variables are finite, discrete, exogenous random variables that the agent observes at the start of a node, before the control variables are decided.\nExamples of random variables include rainfall inflow into a reservoir, probabilistic perishing of inventory, and steering errors in a vehicle.\nRandom variables are local to a node i, and they can differ between nodes. For example, some nodes may have random variables, and some nodes may not.\nWe denote random variables by the Greek letter omega and the sample space from which they are drawn by Omega_i. 
The probability of sampling omega is denoted p_omega for simplicity.\nImportantly, the random variable associated with node i is independent of the random variables in all other nodes.","category":"page"},{"location":"tutorial/first_steps/#Dynamics","page":"An introduction to SDDP.jl","title":"Dynamics","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"In a node i, the three variables are related by a transition function, which maps the incoming state, the controls, and the random variables to the outgoing state as follows: x^prime = T_i(x u omega).","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"As a result of entering a node i with the incoming state x, observing random variable omega, and choosing control u, the agent incurs a cost C_i(x u omega). (If the agent is a maximizer, this can be a profit, or a negative cost.) We call C_i the stage objective.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"To choose their control variables in node i, the agent uses a decision rule u = pi_i(x omega), which is a function that maps the incoming state variable and observation of the random variable to a control u. This control must satisfy some feasibility requirements u in U_i(x omega).","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Here is a schematic which we can use to visualize a single node:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"(Image: Hazard-decision node)","category":"page"},{"location":"tutorial/first_steps/#Policy-graphs","page":"An introduction to SDDP.jl","title":"Policy graphs","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Now that we have a node, we need to connect multiple nodes together to form a multistage stochastic program. We call the graph created by connecting nodes together a policy graph.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The simplest type of policy graph is a linear policy graph. Here's a linear policy graph with three nodes:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"(Image: Linear policy graph)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Here we have dropped the notations inside each node and replaced them by a label (1, 2, and 3) to represent nodes i=1, i=2, and i=3.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"In addition to nodes 1, 2, and 3, there is also a root node (the circle), and three arcs. Each arc has an origin node and a destination node, like 1 => 2, and a corresponding probability of transitioning from the origin to the destination. Unless specified, we assume that the arc probabilities are uniform over the number of outgoing arcs. 
Thus, in this picture the arc probabilities are all 1.0.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"State variables flow along the arcs of the graph. Thus, the outgoing state variable x^prime from node 1 becomes the incoming state variable x to node 2, and so on.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"We denote the set of nodes by mathcalN, the root node by R, and the probability of transitioning from node i to node j by p_ij. (If no arc exists, then p_ij = 0.) We define the set of successors of node i as i^+ = j in mathcalN p_ij 0.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Each node in the graph corresponds to a place at which the agent makes a decision, and we call moments in time at which the agent makes a decision stages. By convention, we try to draw policy graphs from left-to-right, with the stages as columns. There can be more than one node in a stage! Here's an example of a structure we call a Markovian policy graph:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"(Image: Markovian policy graph)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Here each column represents a moment in time, the squiggly lines represent stochastic rainfall, and the rows represent the world in two discrete states: El Niño and La Niña. In the El Niño states, the distribution of the rainfall random variable is different to the distribution of the rainfall random variable in the La Niña states, and there is some switching probability between the two states that can be modelled by a Markov chain.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Moreover, policy graphs can have cycles! This allows them to model infinite horizon problems. Here's another example, taken from the paper Dowson (2020):","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"(Image: POWDer policy graph)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The columns represent time, and the rows represent different states of the world. In this case, the rows represent different prices that milk can be sold for at the end of each year. The squiggly lines denote a multivariate random variable that models the weekly amount of rainfall that occurs.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"note: Note\nThe sum of probabilities on the outgoing arcs of node i can be less than 1, i.e., sumlimits_jin i^+ p_ij le 1. What does this mean? One interpretation is that the probability is a discount factor. Another interpretation is that there is an implicit \"zero\" node that we have not modeled, with p_i0 = 1 - sumlimits_jin i^+ p_ij. 
This zero node has C_0(x u omega) = 0, and 0^+ = varnothing.","category":"page"},{"location":"tutorial/first_steps/#More-notation","page":"An introduction to SDDP.jl","title":"More notation","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Recall that each node i has a decision rule u = pi_i(x omega), which is a function that maps the incoming state variable and observation of the random variable to a control u.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The set of decision rules, with one element for each node in the policy graph, is called a policy.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The goal of the agent is to find a policy that minimizes the expected cost of starting at the root node with some initial condition x_R, and proceeding from node to node along the probabilistic arcs until they reach a node with no outgoing arcs (or it reaches an implicit \"zero\" node).","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"min_pi mathbbE_i in R^+ omega in Omega_iV_i^pi(x_R omega)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"where","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"V_i^pi(x omega) = C_i(x u omega) + mathbbE_j in i^+ varphi in Omega_jV_j(x^prime varphi)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"where u = pi_i(x omega) in U_i(x omega), and x^prime = T_i(x u omega).","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The expectations are a bit complicated, but they are equivalent to:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"mathbbE_j in i^+ varphi in Omega_jV_j(x^prime varphi) = sumlimits_j in i^+ p_ij sumlimits_varphi in Omega_j p_varphiV_j(x^prime varphi)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"An optimal policy is the set of decision rules that the agent can use to make decisions and achieve the smallest expected cost.","category":"page"},{"location":"tutorial/first_steps/#Assumptions","page":"An introduction to SDDP.jl","title":"Assumptions","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"warning: Warning\nThis section is important!","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The space of problems you can model with this framework is very large. Too large, in fact, for us to form tractable solution algorithms for! 
Stochastic dual dynamic programming requires the following assumptions in order to work:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Assumption 1: finite nodes","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"There is a finite number of nodes in mathcalN.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Assumption 2: finite random variables","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The sample space Omega_i is finite and discrete for each node iinmathcalN.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Assumption 3: convex problems","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Given fixed omega, C_i(x u omega) is a convex function, T_i(x u omega) is linear, and U_i(x u omega) is a non-empty, bounded convex set with respect to x and u.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Assumption 4: no infinite loops","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"For all loops in the policy graph, the product of the arc transition probabilities around the loop is strictly less than 1.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Assumption 5: relatively complete recourse","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"This is a technical but important assumption. See Relatively complete recourse for more details.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"note: Note\nSDDP.jl relaxes assumption (3) to allow for integer state and control variables, but we won't go into the details here. Assumption (4) essentially means that we obtain a discounted-cost solution for infinite-horizon problems, instead of an average-cost solution; see Dowson (2020) for details.","category":"page"},{"location":"tutorial/first_steps/#Dynamic-programming-and-subproblems","page":"An introduction to SDDP.jl","title":"Dynamic programming and subproblems","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Now that we have formulated our problem, we need some ways of computing optimal decision rules. 
One way is to just use a heuristic like \"choose a control randomly from the set of feasible controls.\" However, such a policy is unlikely to be optimal.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"A better way of obtaining an optimal policy is to use Bellman's principle of optimality, a.k.a Dynamic Programming, and define a recursive subproblem as follows:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"beginaligned\nV_i(x omega) = minlimits_barx x^prime u C_i(barx u omega) + mathbbE_j in i^+ varphi in Omega_jV_j(x^prime varphi)\n x^prime = T_i(barx u omega) \n u in U_i(barx omega) \n barx = x\nendaligned","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Our decision rule, pi_i(x omega), solves this optimization problem and returns a u^* corresponding to an optimal solution.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"note: Note\nWe add barx as a decision variable, along with the fishing constraint barx = x for two reasons: it makes it obvious that formulating a problem with x times u results in a bilinear program instead of a linear program (see Assumption 3), and it simplifies the implementation of the SDDP algorithm.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"These subproblems are very difficult to solve exactly, because they involve recursive optimization problems with lots of nested expectations.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Therefore, instead of solving them exactly, SDDP.jl works by iteratively approximating the expectation term of each subproblem, which is also called the cost-to-go term. For now, you don't need to understand the details, other than that there is a nasty cost-to-go term that we deal with behind-the-scenes.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The subproblem view of a multistage stochastic program is also important, because it provides a convenient way of communicating the different parts of the broader problem, and it is how we will communicate the problem to SDDP.jl. All we need to do is drop the cost-to-go term and fishing constraint, and define a new subproblem SP as:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"beginaligned\ntextttSP_i(x omega) minlimits_barx x^prime u C_i(barx u omega) \n x^prime = T_i(barx u omega) \n u in U_i(barx omega)\nendaligned","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"note: Note\nWhen we talk about formulating a subproblem with SDDP.jl, this is the formulation we mean.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"We've retained the transition function and uncertainty set because they help to motivate the different components of the subproblem. However, in general, the subproblem can be more general. 
A better (less restrictive) representation might be:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"beginaligned\ntextttSP_i(x omega) minlimits_barx x^prime u C_i(barx x^prime u omega) \n (barx x^prime u) in mathcalX_i(omega)\nendaligned","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Note that the outgoing state variable can appear in the objective, and we can add constraints involving the incoming and outgoing state variables. It should be obvious how to map between the two representations.","category":"page"},{"location":"tutorial/first_steps/#Example:-hydro-thermal-scheduling","page":"An introduction to SDDP.jl","title":"Example: hydro-thermal scheduling","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Hydrothermal scheduling is the most common application of stochastic dual dynamic programming. To illustrate some of the basic functionality of SDDP.jl, we implement a very simple model of the hydrothermal scheduling problem.","category":"page"},{"location":"tutorial/first_steps/#Problem-statement","page":"An introduction to SDDP.jl","title":"Problem statement","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"We consider the problem of scheduling electrical generation over three weeks in order to meet a known demand of 150 MWh in each week.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"There are two generators: a thermal generator, and a hydro generator. In each week, the agent needs to decide how much energy to generate from thermal, and how much energy to generate from hydro.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The thermal generator has a short-run marginal cost of $50/MWh in the first stage, $100/MWh in the second stage, and $150/MWh in the third stage.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The hydro generator has a short-run marginal cost of $0/MWh.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The hydro generator draws water from a reservoir which has a maximum capacity of 200 MWh. (Although water is usually measured in m³, we measure it in the energy-equivalent MWh to simplify things. In practice, there is a conversion function between m³ flowing through the turbine and MWh.) At the start of the first time period, the reservoir is full.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"In addition to the ability to generate electricity by passing water through the hydroelectric turbine, the hydro generator can also spill water down a spillway (bypassing the turbine) in order to prevent the water from over-topping the dam. 
We assume that there is no cost of spillage.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"In addition to water leaving the reservoir, water that flows into the reservoir through rainfall or rivers is referred to as an inflow. These inflows are uncertain, and are the cause of the main trade-off in hydro-thermal scheduling: the desire to use water now to generate cheap electricity, against the risk that future inflows will be low, leading to blackouts or expensive thermal generation.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"For our simple model, we assume that the inflows can be modelled by a discrete distribution with the three outcomes given in the following table:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"ω 0 50 100\nP(ω) 1/3 1/3 1/3","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The value of the noise (the random variable) is observed by the agent at the start of each stage. This makes the problem a wait-and-see or hazard-decision formulation.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The goal of the agent is to minimize the expected cost of generation over the three weeks.","category":"page"},{"location":"tutorial/first_steps/#Formulating-the-problem","page":"An introduction to SDDP.jl","title":"Formulating the problem","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Before going further, we need to load SDDP.jl:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"using SDDP","category":"page"},{"location":"tutorial/first_steps/#Graph-structure","page":"An introduction to SDDP.jl","title":"Graph structure","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"First, we need to identify the structure of the policy graph. From the problem statement, we want to model the problem over three weeks in weekly stages. Therefore, the policy graph is a linear graph with three stages:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"graph = SDDP.LinearGraph(3)","category":"page"},{"location":"tutorial/first_steps/#Building-the-subproblem","page":"An introduction to SDDP.jl","title":"Building the subproblem","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Next, we need to construct the associated subproblem for each node in graph. To do so, we need to provide SDDP.jl a function which takes two arguments. The first is subproblem::Model, which is an empty JuMP model. The second is node, which is the name of each node in the policy graph. If the graph is linear, SDDP defaults to naming the nodes using the integers in 1:T. 
Here's an example that we are going to flesh out over the next few paragraphs:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"function subproblem_builder(subproblem::Model, node::Int)\n # ... stuff to go here ...\n return subproblem\nend","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"warning: Warning\nIf you use a different type of graph, node may be a type different to Int. For example, in SDDP.MarkovianGraph, node is a Tuple{Int,Int}.","category":"page"},{"location":"tutorial/first_steps/#State-variables","page":"An introduction to SDDP.jl","title":"State variables","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The first part of the subproblem we need to identify is the state variables. Since we only have one reservoir, there is only one state variable, volume, the volume of water in the reservoir [MWh].","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The volume had bounds of [0, 200], and the reservoir was full at the start of time, so x_R = 200.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"We add state variables to our subproblem using JuMP's @variable macro. However, in addition to the usual syntax, we also pass SDDP.State, and we need to provide the initial value (x_R) using the initial_value keyword.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"function subproblem_builder(subproblem::Model, node::Int)\n # State variables\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n return subproblem\nend","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The syntax for adding a state variable is a little obtuse, because volume is not a single JuMP variable. Instead, volume is a struct with two fields, .in and .out, corresponding to the incoming and outgoing state variables respectively.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"note: Note\nWe don't need to add the fishing constraint barx = x; SDDP.jl does this automatically.","category":"page"},{"location":"tutorial/first_steps/#Control-variables","page":"An introduction to SDDP.jl","title":"Control variables","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The next part of the subproblem we need to identify is the control variables. 
The control variables for our problem are:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"thermal_generation: the quantity of energy generated from thermal [MWh/week]\nhydro_generation: the quantity of energy generated from hydro [MWh/week]\nhydro_spill: the volume of water spilled from the reservoir in each week [MWh/week]","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Each of these variables is non-negative.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"We add control variables to our subproblem as normal JuMP variables, using @variable or @variables:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"function subproblem_builder(subproblem::Model, node::Int)\n # State variables\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n # Control variables\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n end)\n return subproblem\nend","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"tip: Tip\nModeling is an art, and a tricky part of that art is figuring out which variables are state variables, and which are control variables. A good rule is: if you need a value of a control variable in some future node to make a decision, it is a state variable instead.","category":"page"},{"location":"tutorial/first_steps/#Random-variables","page":"An introduction to SDDP.jl","title":"Random variables","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The next step is to identify any random variables. In our example, we had","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"inflow: the quantity of water that flows into the reservoir each week [MWh/week]","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"To add an uncertain variable to the model, we create a new JuMP variable inflow, and then call the function SDDP.parameterize. 
The SDDP.parameterize function takes three arguments: the subproblem, a vector of realizations, and a corresponding vector of probabilities.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"function subproblem_builder(subproblem::Model, node::Int)\n # State variables\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n # Control variables\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n end)\n # Random variables\n @variable(subproblem, inflow)\n Ω = [0.0, 50.0, 100.0]\n P = [1 / 3, 1 / 3, 1 / 3]\n SDDP.parameterize(subproblem, Ω, P) do ω\n return JuMP.fix(inflow, ω)\n end\n return subproblem\nend","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Note how we use the JuMP function JuMP.fix to set the value of the inflow variable to ω.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"warning: Warning\nSDDP.parameterize can only be called once in each subproblem definition! If your random variable is multi-variate, read Add multi-dimensional noise terms.","category":"page"},{"location":"tutorial/first_steps/#Transition-function-and-constraints","page":"An introduction to SDDP.jl","title":"Transition function and constraints","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Now that we've identified our variables, we can define the transition function and the constraints.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"For our problem, the state variable is the volume of water in the reservoir. The volume of water decreases in response to water being used for hydro generation and spillage. So the transition function is: volume.out = volume.in - hydro_generation - hydro_spill + inflow. 
(Note how we use volume.in and volume.out to refer to the incoming and outgoing state variables.)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"There is also a constraint that the total generation must sum to 150 MWh.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Both the transition function and any additional constraint are added using JuMP's @constraint and @constraints macro.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"function subproblem_builder(subproblem::Model, node::Int)\n # State variables\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n # Control variables\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n end)\n # Random variables\n @variable(subproblem, inflow)\n Ω = [0.0, 50.0, 100.0]\n P = [1 / 3, 1 / 3, 1 / 3]\n SDDP.parameterize(subproblem, Ω, P) do ω\n return JuMP.fix(inflow, ω)\n end\n # Transition function and constraints\n @constraints(\n subproblem,\n begin\n volume.out == volume.in - hydro_generation - hydro_spill + inflow\n demand_constraint, hydro_generation + thermal_generation == 150\n end\n )\n return subproblem\nend","category":"page"},{"location":"tutorial/first_steps/#Objective-function","page":"An introduction to SDDP.jl","title":"Objective function","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Finally, we need to add an objective function using @stageobjective. The objective of the agent is to minimize the cost of thermal generation. 
This is complicated by a fuel cost that depends on the node.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"One possibility is to use an if statement on node to define the correct objective:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"function subproblem_builder(subproblem::Model, node::Int)\n # State variables\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n # Control variables\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n end)\n # Random variables\n @variable(subproblem, inflow)\n Ω = [0.0, 50.0, 100.0]\n P = [1 / 3, 1 / 3, 1 / 3]\n SDDP.parameterize(subproblem, Ω, P) do ω\n return JuMP.fix(inflow, ω)\n end\n # Transition function and constraints\n @constraints(\n subproblem,\n begin\n volume.out == volume.in - hydro_generation - hydro_spill + inflow\n demand_constraint, hydro_generation + thermal_generation == 150\n end\n )\n # Stage-objective\n if node == 1\n @stageobjective(subproblem, 50 * thermal_generation)\n elseif node == 2\n @stageobjective(subproblem, 100 * thermal_generation)\n else\n @assert node == 3\n @stageobjective(subproblem, 150 * thermal_generation)\n end\n return subproblem\nend","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"A second possibility is to use an array of fuel costs, and use node to index the correct value:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"function subproblem_builder(subproblem::Model, node::Int)\n # State variables\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n # Control variables\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n end)\n # Random variables\n @variable(subproblem, inflow)\n Ω = [0.0, 50.0, 100.0]\n P = [1 / 3, 1 / 3, 1 / 3]\n SDDP.parameterize(subproblem, Ω, P) do ω\n return JuMP.fix(inflow, ω)\n end\n # Transition function and constraints\n @constraints(\n subproblem,\n begin\n volume.out == volume.in - hydro_generation - hydro_spill + inflow\n demand_constraint, hydro_generation + thermal_generation == 150\n end\n )\n # Stage-objective\n fuel_cost = [50, 100, 150]\n @stageobjective(subproblem, fuel_cost[node] * thermal_generation)\n return subproblem\nend","category":"page"},{"location":"tutorial/first_steps/#Constructing-the-model","page":"An introduction to SDDP.jl","title":"Constructing the model","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Now that we've written our subproblem, we need to construct the full model. For that, we're going to need a linear solver. Let's choose HiGHS:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"using HiGHS","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"warning: Warning\nIn larger problems, you should use a more robust commercial LP solver like Gurobi. 
Read Words of warning for more details.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Then, we can create a full model using SDDP.PolicyGraph, passing our subproblem_builder function as the first argument, and our graph as the second:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"model = SDDP.PolicyGraph(\n subproblem_builder,\n graph;\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"sense: the optimization sense. Must be :Min or :Max.\nlower_bound: you must supply a valid bound on the objective. For our problem, we know that we cannot incur a negative cost so $0 is a valid lower bound.\noptimizer: This is borrowed directly from JuMP's Model constructor: Model(HiGHS.Optimizer)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Because linear policy graphs are the most commonly used structure, we can use SDDP.LinearPolicyGraph instead of passing SDDP.LinearGraph(3) to SDDP.PolicyGraph.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"model = SDDP.LinearPolicyGraph(\n subproblem_builder;\n stages = 3,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"There is also the option to use Julia's do syntax to avoid needing to define a subproblem_builder function separately:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"model = SDDP.LinearPolicyGraph(;\n stages = 3,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, node\n # State variables\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n # Control variables\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n end)\n # Random variables\n @variable(subproblem, inflow)\n Ω = [0.0, 50.0, 100.0]\n P = [1 / 3, 1 / 3, 1 / 3]\n SDDP.parameterize(subproblem, Ω, P) do ω\n return JuMP.fix(inflow, ω)\n end\n # Transition function and constraints\n @constraints(\n subproblem,\n begin\n volume.out == volume.in - hydro_generation - hydro_spill + inflow\n demand_constraint, hydro_generation + thermal_generation == 150\n end\n )\n # Stage-objective\n if node == 1\n @stageobjective(subproblem, 50 * thermal_generation)\n elseif node == 2\n @stageobjective(subproblem, 100 * thermal_generation)\n else\n @assert node == 3\n @stageobjective(subproblem, 150 * thermal_generation)\n end\nend","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"info: Info\nJulia's do syntax is just a different way of passing an anonymous function inner to some function outer which takes inner as the first argument. 
For example, given:outer(inner::Function, x, y) = inner(x, y)thenouter(1, 2) do x, y\n return x^2 + y^2\nendis equivalent to:outer((x, y) -> x^2 + y^2, 1, 2)For our purpose, inner is subproblem_builder, and outer is SDDP.PolicyGraph.","category":"page"},{"location":"tutorial/first_steps/#Training-a-policy","page":"An introduction to SDDP.jl","title":"Training a policy","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Now we have a model, which is a description of the policy graph, we need to train a policy. Models can be trained using the SDDP.train function. It accepts a number of keyword arguments. iteration_limit terminates the training after the provided number of iterations.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"SDDP.train(model; iteration_limit = 10)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"There's a lot going on in this printout! Let's break it down.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The first section, \"problem,\" gives some problem statistics. In this example there are 3 nodes, 1 state variable, and 27 scenarios (3^3). We haven't solved this problem before so there are no existing cuts.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The \"options\" section lists some options we are using to solve the problem. For more information on the numerical stability report, read the Numerical stability report section.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The \"subproblem structure\" section also needs explaining. This looks at all of the nodes in the policy graph and reports the minimum and maximum number of variables and each constraint type in the corresponding subproblem. In this case each subproblem has 7 variables and various numbers of different constraint types. Note that the exact numbers may not correspond to the formulation as you wrote it, because SDDP.jl adds some extra variables for the cost-to-go function.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Then comes the iteration log, which is the main part of the printout. It has the following columns:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"iteration: the SDDP iteration\nsimulation: the cost of the single forward pass simulation for that iteration. This value is stochastic and is not guaranteed to improve over time. However, it's useful to check that the units are reasonable, and that it is not deterministic if you intended for the problem to be stochastic, etc.\nbound: this is a lower bound (upper if maximizing) for the value of the optimal policy. 
This bound should be monotonically improving (increasing if minimizing, decreasing if maximizing), but in some cases it can temporarily worsen due to cut selection, especially in the early iterations of the algorithm.\ntime (s): the total number of seconds spent solving so far\nsolves: the total number of subproblem solves to date. This can be very large!\npid: the ID of the processor used to solve that iteration. This should be 1 unless you are using parallel computation.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"In addition, if the first character of a line is †, then SDDP.jl experienced numerical issues during the solve, but successfully recovered.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"The printout finishes with some summary statistics:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"status: why did the solver stop?\ntotal time (s), best bound, and total solves are the values from the last iteration of the solve.\nsimulation ci: a confidence interval that estimates the quality of the policy from the Simulation column.\nnumeric issues: the number of iterations that experienced numerical issues.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"warning: Warning\nThe simulation ci result can be misleading if you run a small number of iterations, or if the initial simulations are very bad. On a more technical note, it is an in-sample simulation, which may not reflect the true performance of the policy. See Obtaining bounds for more details.","category":"page"},{"location":"tutorial/first_steps/#Obtaining-the-decision-rule","page":"An introduction to SDDP.jl","title":"Obtaining the decision rule","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"After training a policy, we can create a decision rule using SDDP.DecisionRule:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"rule = SDDP.DecisionRule(model; node = 1)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Then, to evaluate the decision rule, we use SDDP.evaluate:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"solution = SDDP.evaluate(\n rule;\n incoming_state = Dict(:volume => 150.0),\n noise = 50.0,\n controls_to_record = [:hydro_generation, :thermal_generation],\n)","category":"page"},{"location":"tutorial/first_steps/#Simulating-the-policy","page":"An introduction to SDDP.jl","title":"Simulating the policy","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Once you have a trained policy, you can also simulate it using SDDP.simulate. The return value from simulate is a vector with one element for each replication. Each element is itself a vector, with one element for each stage. 
Each element, corresponding to a particular stage in a particular replication, is a dictionary that records information from the simulation.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"simulations = SDDP.simulate(\n # The trained model to simulate.\n model,\n # The number of replications.\n 100,\n # A list of names to record the values of.\n [:volume, :thermal_generation, :hydro_generation, :hydro_spill],\n)\n\nreplication = 1\nstage = 2\nsimulations[replication][stage]","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Ignore many of the entries for now; they will be relevant later.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"One element of interest is :volume.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"outgoing_volume = map(simulations[1]) do node\n return node[:volume].out\nend","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Another is :thermal_generation.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"thermal_generation = map(simulations[1]) do node\n return node[:thermal_generation]\nend","category":"page"},{"location":"tutorial/first_steps/#Obtaining-bounds","page":"An introduction to SDDP.jl","title":"Obtaining bounds","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Because the optimal policy is stochastic, one common approach to quantify the quality of the policy is to construct a confidence interval for the expected cost by summing the stage objectives along each simulation.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"objectives = map(simulations) do simulation\n return sum(stage[:stage_objective] for stage in simulation)\nend\n\nμ, ci = SDDP.confidence_interval(objectives)\nprintln(\"Confidence interval: \", μ, \" ± \", ci)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"This confidence interval is an estimate for an upper bound of the policy's quality. We can calculate the lower bound using SDDP.calculate_bound.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"println(\"Lower bound: \", SDDP.calculate_bound(model))","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"tip: Tip\nThe upper- and lower-bounds are reversed if maximizing, i.e., SDDP.calculate_bound returns an upper bound.","category":"page"},{"location":"tutorial/first_steps/#Custom-recorders","page":"An introduction to SDDP.jl","title":"Custom recorders","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"In addition to simulating the primal values of variables, we can also pass custom recorder functions. 
Each of these functions takes one argument, the JuMP subproblem corresponding to each node. This function gets called after we have solved each node as we traverse the policy graph in the simulation.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"For example, the dual of the demand constraint (which we named demand_constraint) corresponds to the price we should charge for electricity, since it represents the cost of each additional unit of demand. To calculate this, we can go:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"simulations = SDDP.simulate(\n model,\n 1; ## Perform a single simulation\n custom_recorders = Dict{Symbol,Function}(\n :price => (sp::JuMP.Model) -> JuMP.dual(sp[:demand_constraint]),\n ),\n)\n\nprices = map(simulations[1]) do node\n return node[:price]\nend","category":"page"},{"location":"tutorial/first_steps/#Extracting-the-marginal-water-values","page":"An introduction to SDDP.jl","title":"Extracting the marginal water values","text":"","category":"section"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Finally, we can use SDDP.ValueFunction and SDDP.evaluate to obtain and evaluate the value function at different points in the state-space.","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"note: Note\nBy \"value function\" we mean mathbbE_j in i^+ varphi in Omega_jV_j(x^prime varphi), not the function V_i(x omega).","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"First, we construct a value function from the first subproblem:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"V = SDDP.ValueFunction(model; node = 1)","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"Then we can evaluate V at a point:","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"cost, price = SDDP.evaluate(V, Dict(\"volume\" => 10))","category":"page"},{"location":"tutorial/first_steps/","page":"An introduction to SDDP.jl","title":"An introduction to SDDP.jl","text":"This returns the cost-to-go (cost), and the gradient of the cost-to-go function with respect to each state variable. Note that since we are minimizing, the price has a negative sign: each additional unit of water leads to a decrease in the expected long-run cost.","category":"page"},{"location":"examples/StochDynamicProgramming.jl_stock/","page":"StochDynamicProgramming: the stock problem","title":"StochDynamicProgramming: the stock problem","text":"EditURL = \"StochDynamicProgramming.jl_stock.jl\"","category":"page"},{"location":"examples/StochDynamicProgramming.jl_stock/#StochDynamicProgramming:-the-stock-problem","page":"StochDynamicProgramming: the stock problem","title":"StochDynamicProgramming: the stock problem","text":"","category":"section"},{"location":"examples/StochDynamicProgramming.jl_stock/","page":"StochDynamicProgramming: the stock problem","title":"StochDynamicProgramming: the stock problem","text":"This tutorial was generated using Literate.jl. 
Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/StochDynamicProgramming.jl_stock/","page":"StochDynamicProgramming: the stock problem","title":"StochDynamicProgramming: the stock problem","text":"This example comes from StochDynamicProgramming.jl.","category":"page"},{"location":"examples/StochDynamicProgramming.jl_stock/","page":"StochDynamicProgramming: the stock problem","title":"StochDynamicProgramming: the stock problem","text":"using SDDP, HiGHS, Test\n\nfunction stock_example()\n model = SDDP.PolicyGraph(\n SDDP.LinearGraph(5);\n lower_bound = -2,\n optimizer = HiGHS.Optimizer,\n ) do sp, stage\n @variable(sp, 0 <= state <= 1, SDDP.State, initial_value = 0.5)\n @variable(sp, 0 <= control <= 0.5)\n @variable(sp, ξ)\n @constraint(sp, state.out == state.in - control + ξ)\n SDDP.parameterize(sp, 0.0:1/30:0.3) do ω\n return JuMP.fix(ξ, ω)\n end\n @stageobjective(sp, (sin(3 * stage) - 1) * control)\n end\n SDDP.train(model; log_frequency = 10)\n @test SDDP.calculate_bound(model) ≈ -1.471 atol = 0.001\n simulation_results = SDDP.simulate(model, 1_000)\n @test length(simulation_results) == 1_000\n μ = SDDP.Statistics.mean(\n sum(data[:stage_objective] for data in simulation) for\n simulation in simulation_results\n )\n @test μ ≈ -1.471 atol = 0.05\n return\nend\n\nstock_example()","category":"page"},{"location":"examples/agriculture_mccardle_farm/","page":"The farm planning problem","title":"The farm planning problem","text":"EditURL = \"agriculture_mccardle_farm.jl\"","category":"page"},{"location":"examples/agriculture_mccardle_farm/#The-farm-planning-problem","page":"The farm planning problem","title":"The farm planning problem","text":"","category":"section"},{"location":"examples/agriculture_mccardle_farm/","page":"The farm planning problem","title":"The farm planning problem","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/agriculture_mccardle_farm/","page":"The farm planning problem","title":"The farm planning problem","text":"There are four stages. The first stage is a deterministic planning stage. The next three are wait-and-see operational stages. The uncertainty in the three operational stages is a Markov chain for weather. There are three Markov states: dry, normal, and wet.","category":"page"},{"location":"examples/agriculture_mccardle_farm/","page":"The farm planning problem","title":"The farm planning problem","text":"Inspired by R. McCardle, Farm management optimization. 
Masters thesis, University of Louisville, Louisville, Kentucky, United States of America (2009).","category":"page"},{"location":"examples/agriculture_mccardle_farm/","page":"The farm planning problem","title":"The farm planning problem","text":"All data, including short variable names, is taken from that thesis.","category":"page"},{"location":"examples/agriculture_mccardle_farm/","page":"The farm planning problem","title":"The farm planning problem","text":"using SDDP, HiGHS, Test\n\nfunction test_mccardle_farm_model()\n S = [ # cutting, stage\n 0 1 2\n 0 0 1\n 0 0 0\n ]\n t = [60, 60, 245] # days in period\n D = [210, 210, 858] # demand\n q = [ # selling price per bale\n [4.5 4.5 4.5; 4.5 4.5 4.5; 4.5 4.5 4.5],\n [5.5 5.5 5.5; 5.5 5.5 5.5; 5.5 5.5 5.5],\n [6.5 6.5 6.5; 6.5 6.5 6.5; 6.5 6.5 6.5],\n ]\n b = [ # predicted yield (bales/acres) from cutting i in weather j.\n 30 75 37.5\n 15 37.5 18.25\n 7.5 18.75 9.325\n ]\n w = 3000 # max storage\n C = [50 50 50; 50 50 50; 50 50 50] # cost to grow hay\n r = [ # Cost per bale of hay from cutting i during weather condition j.\n [5 5 5; 5 5 5; 5 5 5],\n [6 6 6; 6 6 6; 6 6 6],\n [7 7 7; 7 7 7; 7 7 7],\n ]\n M = 60.0 # max acreage for planting\n H = 0.0 # initial inventory\n V = [0.05, 0.05, 0.05] # inventory cost\n L = 3000.0 # max demand for hay\n\n graph = SDDP.MarkovianGraph([\n ones(Float64, 1, 1),\n [0.14 0.69 0.17],\n [0.14 0.69 0.17; 0.14 0.69 0.17; 0.14 0.69 0.17],\n [0.14 0.69 0.17; 0.14 0.69 0.17; 0.14 0.69 0.17],\n ])\n\n model = SDDP.PolicyGraph(\n graph;\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, index\n stage, weather = index\n # ===================== State Variables =====================\n # Area planted.\n @variable(subproblem, 0 <= acres <= M, SDDP.State, initial_value = M)\n @variable(\n subproblem,\n bales[i = 1:3] >= 0,\n SDDP.State,\n initial_value = (i == 1 ? 
H : 0)\n )\n # ===================== Variables =====================\n @variables(subproblem, begin\n buy[1:3] >= 0 # Quantity of bales to buy from each cutting.\n sell[1:3] >= 0 # Quantity of bales to sell from each cutting.\n eat[1:3] >= 0 # Quantity of bales to eat from each cutting.\n pen_p[1:3] >= 0 # Penalties\n pen_n[1:3] >= 0 # Penalties\n end)\n # ===================== Constraints =====================\n if stage == 1\n @constraint(subproblem, acres.out <= acres.in)\n @constraint(subproblem, [i = 1:3], bales[i].in == bales[i].out)\n else\n @expression(\n subproblem,\n cut_ex[c = 1:3],\n bales[c].in + buy[c] - eat[c] - sell[c] + pen_p[c] - pen_n[c]\n )\n @constraints(\n subproblem,\n begin\n # Cannot plant more land than previously cropped.\n acres.out <= acres.in\n # In each stage we need to meet demand.\n sum(eat) >= D[stage-1]\n # We can buy and sell other cuttings.\n bales[stage-1].out ==\n cut_ex[stage-1] + acres.in * b[stage-1, weather]\n [c = 1:3; c != stage - 1], bales[c].out == cut_ex[c]\n # There is some maximum storage.\n sum(bales[i].out for i in 1:3) <= w\n # We can only sell what is in storage.\n [c = 1:3], sell[c] <= bales[c].in\n # Maximum sales quantity.\n sum(sell) <= L\n end\n )\n end\n # ===================== Stage objective =====================\n if stage == 1\n @stageobjective(subproblem, 0.0)\n else\n @stageobjective(\n subproblem,\n 1000 * (sum(pen_p) + sum(pen_n)) +\n # cost of growing\n C[stage-1, weather] * acres.in +\n sum(\n # inventory cost\n V[stage-1] * bales[cutting].in * t[stage-1] +\n # purchase cost\n r[cutting][stage-1, weather] * buy[cutting] +\n # feed cost\n S[cutting, stage-1] * eat[cutting] -\n # sell reward\n q[cutting][stage-1, weather] * sell[cutting] for\n cutting in 1:3\n )\n )\n end\n return\n end\n SDDP.train(model)\n @test SDDP.termination_status(model) == :simulation_stopping\n @test SDDP.calculate_bound(model) ≈ 4074.1391 atol = 1e-5\nend\n\ntest_mccardle_farm_model()","category":"page"},{"location":"examples/vehicle_location/","page":"Vehicle location","title":"Vehicle location","text":"EditURL = \"vehicle_location.jl\"","category":"page"},{"location":"examples/vehicle_location/#Vehicle-location","page":"Vehicle location","title":"Vehicle location","text":"","category":"section"},{"location":"examples/vehicle_location/","page":"Vehicle location","title":"Vehicle location","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/vehicle_location/","page":"Vehicle location","title":"Vehicle location","text":"This problem is a version of the Ambulance dispatch problem. A hospital is located at 0 on the number line that stretches from 0 to 100. Ambulance bases are located at points 20, 40, 60, 80, and 100. When not responding to a call, Ambulances must be located at a base, or the hospital. In this example there are three ambulances.","category":"page"},{"location":"examples/vehicle_location/","page":"Vehicle location","title":"Vehicle location","text":"Example location:","category":"page"},{"location":"examples/vehicle_location/","page":"Vehicle location","title":"Vehicle location","text":"H B B B B B\n0 ---- 20 ---- 40 ---- 60 ---- 80 ---- 100","category":"page"},{"location":"examples/vehicle_location/","page":"Vehicle location","title":"Vehicle location","text":"Each stage, a call comes in from somewhere on the number line. The agent must decide which ambulance to dispatch. 
They pay the cost of twice the driving distance. If an ambulance is not dispatched in a stage, the ambulance can be relocated to a different base in preparation for future calls. This incurs a cost of the driving distance.","category":"page"},{"location":"examples/vehicle_location/","page":"Vehicle location","title":"Vehicle location","text":"using SDDP\nimport HiGHS\nimport Test\n\nfunction vehicle_location_model(duality_handler)\n hospital_location = 0\n bases = vcat(hospital_location, [20, 40, 60, 80, 100])\n vehicles = [1, 2, 3]\n requests = 0:10:100\n shift_cost(src, dest) = abs(src - dest)\n function dispatch_cost(base, request)\n return 2 * (abs(request - hospital_location) + abs(request - base))\n end\n # Initial state of emergency vehicles at bases. All ambulances start at the\n # hospital.\n initial_state(b, v) = b == hospital_location ? 1.0 : 0.0\n model = SDDP.LinearPolicyGraph(;\n stages = 10,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, t\n # Current location of each vehicle at each base.\n @variable(\n sp,\n 0 <= location[b = bases, v = vehicles] <= 1,\n SDDP.State,\n initial_value = initial_state(b, v)\n )\n @variables(sp, begin\n # Which vehicle is dispatched?\n 0 <= dispatch[bases, vehicles] <= 1, Bin\n # Shifting vehicles between bases: [src, dest, vehicle]\n 0 <= shift[bases, bases, vehicles] <= 1, Bin\n end)\n # Flow of vehicles in and out of bases:\n @expression(\n sp,\n base_balance[b in bases, v in vehicles],\n location[b, v].in - dispatch[b, v] - sum(shift[b, :, v]) +\n sum(shift[:, b, v])\n )\n @constraints(\n sp,\n begin\n # Only one vehicle dispatched to call.\n sum(dispatch) == 1\n # Can only dispatch vehicle from base if vehicle is at that base.\n [b in bases, v in vehicles],\n dispatch[b, v] <= location[b, v].in\n # Can only shift vehicle if vehicle is at that src base.\n [b in bases, v in vehicles],\n sum(shift[b, :, v]) <= location[b, v].in\n # Can only shift vehicle if vehicle is not being dispatched.\n [b in bases, v in vehicles],\n sum(shift[b, :, v]) + dispatch[b, v] <= 1\n # Can't shift to same base.\n [b in bases, v in vehicles], shift[b, b, v] == 0\n # Update states for non-home/non-hospital bases.\n [b in bases[2:end], v in vehicles],\n location[b, v].out == base_balance[b, v]\n # Update states for home/hospital bases.\n [v in vehicles],\n location[hospital_location, v].out ==\n base_balance[hospital_location, v] + sum(dispatch[:, v])\n end\n )\n SDDP.parameterize(sp, requests) do request\n @stageobjective(\n sp,\n sum(\n # Distance to travel from base to emergency and then to hospital.\n dispatch[b, v] * dispatch_cost(b, request) +\n # Distance travelled by vehicles relocating bases.\n sum(\n shift_cost(b, dest) * shift[b, dest, v] for\n dest in bases\n ) for b in bases, v in vehicles\n )\n )\n end\n end\n if get(ARGS, 1, \"\") == \"--write\"\n # Run `$ julia vehicle_location.jl --write` to update the benchmark\n # model directory\n model_dir = joinpath(@__DIR__, \"..\", \"..\", \"..\", \"benchmarks\", \"models\")\n SDDP.write_to_file(\n model,\n joinpath(model_dir, \"vehicle_location.sof.json.gz\");\n test_scenarios = 100,\n )\n exit(0)\n end\n SDDP.train(\n model;\n iteration_limit = 20,\n log_frequency = 10,\n cut_deletion_minimum = 100,\n duality_handler = duality_handler,\n )\n Test.@test SDDP.calculate_bound(model) >= 1000\n return\nend\n\n# TODO(odow): find out why this fails\n# 
vehicle_location_model(SDDP.ContinuousConicDuality())","category":"page"},{"location":"guides/improve_computational_performance/#Improve-computational-performance","page":"Improve computational performance","title":"Improve computational performance","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"SDDP is a computationally intensive algorithm. Here are some suggestions for how the computational performance can be improved.","category":"page"},{"location":"guides/improve_computational_performance/#Numerical-stability-(again)","page":"Improve computational performance","title":"Numerical stability (again)","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"We've already discussed this in the Numerical stability section of Words of warning. But, it's so important that we're going to say it again: improving the problem scaling is one of the best ways to improve the numerical performance of your models.","category":"page"},{"location":"guides/improve_computational_performance/#Solver-selection","page":"Improve computational performance","title":"Solver selection","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"The majority of the solution time is spent inside the low-level solvers. Choosing the right solver (and the associated settings) can lead to big speed-ups.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"Choose a commercial solver.\nOptions include CPLEX, Gurobi, and Xpress. Using free solvers such as CLP and HiGHS isn't a viable approach for large problems.\nTry different solvers.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"Even commercial solvers can have wildly different solution times. We've seen models on which CPLEX was 50% faster than Gurobi, and vice versa.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"Experiment with different solver options.\nUsing the default settings is usually a good option. However, sometimes it can pay to change these. In particular, forcing solvers to use the dual simplex algorithm (e.g., Method=1 in Gurobi) is usually a performance win.","category":"page"},{"location":"guides/improve_computational_performance/#Single-cut-vs.-multi-cut","page":"Improve computational performance","title":"Single-cut vs. multi-cut","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"There are two competing ways that cuts can be created in SDDP: single-cut and multi-cut. By default, SDDP.jl uses the single-cut version of SDDP.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"The performance of each method is problem-dependent. 
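One rough way to compare the two on your problem is to train two copies of the same model, one with each cut_type, and compare the resulting bound and training time. The following is a minimal sketch, not part of the original guide: build_model() is a hypothetical stand-in for whatever function constructs your model, and the iteration limit is illustrative only.

using SDDP, HiGHS

function build_model()  # hypothetical placeholder: replace with your own model constructor
    return SDDP.LinearPolicyGraph(;
        stages = 3,
        lower_bound = 0.0,
        optimizer = HiGHS.Optimizer,
    ) do sp, t
        @variable(sp, x >= 0, SDDP.State, initial_value = 1.0)
        @stageobjective(sp, x.out)
    end
end

for cut_type in (SDDP.SINGLE_CUT, SDDP.MULTI_CUT)
    model = build_model()
    # Train with the chosen cut type and report the deterministic bound.
    SDDP.train(model; cut_type = cut_type, iteration_limit = 50, print_level = 0)
    println(cut_type, ": bound = ", SDDP.calculate_bound(model))
end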
We recommend that you try both (as in the sketch above) in order to see which one performs better. In general, the single-cut method works better when the number of realizations of the stagewise-independent random variable is large, whereas the multi-cut method works better on small problems. However, the multi-cut method can cause numerical stability problems, particularly if used in conjunction with objective or belief state variables.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"You can switch between the methods by passing the relevant flag to cut_type in SDDP.train.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"SDDP.train(model; cut_type = SDDP.SINGLE_CUT)\nSDDP.train(model; cut_type = SDDP.MULTI_CUT)","category":"page"},{"location":"guides/improve_computational_performance/#Parallelism","page":"Improve computational performance","title":"Parallelism","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"SDDP.jl can take advantage of the parallel nature of modern computers to solve problems across multiple cores.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"info: Info\nWe highly recommend that you read the Julia manual's section on parallel computing.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"You can start Julia from a command line with N processors using the -p flag:","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"julia -p N","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"Alternatively, you can use the Distributed.jl package:","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"using Distributed\nDistributed.addprocs(N)","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"warning: Warning\nWorkers DON'T inherit their parent's Pkg environment. Therefore, if you started Julia with --project=/path/to/environment (or if you activated an environment from the REPL), you will need to put the following at the top of your script:\nusing Distributed\n@everywhere begin\n import Pkg\n Pkg.activate(\"/path/to/environment\")\nend","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"Currently SDDP.jl supports two parallel schemes, SDDP.Serial and SDDP.Asynchronous. 
Instances of these parallel schemes should be passed to the parallel_scheme argument of SDDP.train and SDDP.simulate.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"using SDDP, HiGHS\nmodel = SDDP.LinearPolicyGraph(\n stages = 2, lower_bound = 0, optimizer = HiGHS.Optimizer\n) do sp, t\n @variable(sp, x >= 0, SDDP.State, initial_value = 1)\n @stageobjective(sp, x.out)\nend\nSDDP.train(model; iteration_limit = 10, parallel_scheme = SDDP.Asynchronous())\nSDDP.simulate(model, 10; parallel_scheme = SDDP.Asynchronous())","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"There is a large overhead for using the asynchronous solver. Even if you choose asynchronous mode, SDDP.jl will start in serial mode while the initialization takes place. Therefore, in the log you will see that the initial iterations take place on the master thread (Proc. ID = 1), and it is only after a while that the solve switches to full parallelism.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"info: Info\nBecause of the large data communication requirements (all cuts have to be shared with all other cores), the solution time will not scale linearly with the number of cores.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"info: Info\nGiven the same number of iterations, the policy obtained from asynchronous mode will be worse than the policy obtained from serial mode. However, the asynchronous solver can take significantly less time to compute the same number of iterations.","category":"page"},{"location":"guides/improve_computational_performance/#Data-movement","page":"Improve computational performance","title":"Data movement","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"By default, data defined on the master process is not made available to the workers. 
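A quick way to observe this is to check whether a variable defined on the master is visible on a worker. This is a minimal sketch, not part of the original guide; the worker count is illustrative only.

using Distributed
Distributed.addprocs(2)
data = 1
# The variable exists on the master process...
isdefined(Main, :data)  # returns true
# ...but the workers have no copy of it:
remotecall_fetch(() -> isdefined(Main, :data), first(workers()))  # returns false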
Therefore, a model like the following:","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"data = 1\nmodel = SDDP.LinearPolicyGraph(stages = 2, lower_bound = 0) do sp, t\n @variable(sp, x >= 0, SDDP.State, initial_value = data)\n @stageobjective(sp, x.out)\nend","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"will result in an error like UndefVarError: data not defined.","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"There are three solutions to this problem.","category":"page"},{"location":"guides/improve_computational_performance/#Option-1:-declare-data-inside-the-build-function","page":"Improve computational performance","title":"Option 1: declare data inside the build function","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"model = SDDP.LinearPolicyGraph(stages = 2) do sp, t\n data = 1\n @variable(sp, x >= 0, SDDP.State, initial_value = 1)\n @stageobjective(sp, x)\nend","category":"page"},{"location":"guides/improve_computational_performance/#Option-2:-use-@everywhere","page":"Improve computational performance","title":"Option 2: use @everywhere","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"@everywhere begin\n data = 1\nend\nmodel = SDDP.LinearPolicyGraph(stages = 2) do sp, t\n @variable(sp, x >= 0, SDDP.State, initial_value = 1)\n @stageobjective(sp, x)\nend","category":"page"},{"location":"guides/improve_computational_performance/#Option-3:-build-the-model-in-a-function","page":"Improve computational performance","title":"Option 3: build the model in a function","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"function build_model()\n data = 1\n return SDDP.LinearPolicyGraph(stages = 2) do sp, t\n @variable(sp, x >= 0, SDDP.State, initial_value = 1)\n @stageobjective(sp, x)\n end\nend\n\nmodel = build_model()","category":"page"},{"location":"guides/improve_computational_performance/#Initialization-hooks","page":"Improve computational performance","title":"Initialization hooks","text":"","category":"section"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"warning: Warning\nThis is important if you use Gurobi!","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"SDDP.Asynchronous accepts a pre-processing hook that is run on each worker process before the model is solved. The most useful situation is for solvers that need an initialization step. A good example is Gurobi, which can share an environment amongst all models on a worker. 
Notably, this environment cannot be shared amongst workers, so defining one environment at the top of a script will fail!","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"To initialize a new environment on each worker, use the following:","category":"page"},{"location":"guides/improve_computational_performance/","page":"Improve computational performance","title":"Improve computational performance","text":"SDDP.train(\n model;\n parallel_scheme = SDDP.Asynchronous() do m::SDDP.PolicyGraph\n env = Gurobi.Env()\n set_optimizer(m, () -> Gurobi.Optimizer(env))\n end,\n)","category":"page"},{"location":"examples/FAST_quickstart/","page":"FAST: the quickstart problem","title":"FAST: the quickstart problem","text":"EditURL = \"FAST_quickstart.jl\"","category":"page"},{"location":"examples/FAST_quickstart/#FAST:-the-quickstart-problem","page":"FAST: the quickstart problem","title":"FAST: the quickstart problem","text":"","category":"section"},{"location":"examples/FAST_quickstart/","page":"FAST: the quickstart problem","title":"FAST: the quickstart problem","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/FAST_quickstart/","page":"FAST: the quickstart problem","title":"FAST: the quickstart problem","text":"An implementation of the QuickStart example from FAST","category":"page"},{"location":"examples/FAST_quickstart/","page":"FAST: the quickstart problem","title":"FAST: the quickstart problem","text":"using SDDP, HiGHS, Test\n\nfunction fast_quickstart()\n model = SDDP.PolicyGraph(\n SDDP.LinearGraph(2);\n lower_bound = -5,\n optimizer = HiGHS.Optimizer,\n ) do sp, t\n @variable(sp, x >= 0, SDDP.State, initial_value = 0.0)\n if t == 1\n @stageobjective(sp, x.out)\n else\n @variable(sp, s >= 0)\n @constraint(sp, s <= x.in)\n SDDP.parameterize(sp, [2, 3]) do ω\n return JuMP.set_upper_bound(s, ω)\n end\n @stageobjective(sp, -2s)\n end\n end\n\n det = SDDP.deterministic_equivalent(model, HiGHS.Optimizer)\n set_silent(det)\n JuMP.optimize!(det)\n @test JuMP.objective_value(det) == -2\n\n SDDP.train(model; log_every_iteration = true)\n @test SDDP.calculate_bound(model) == -2\nend\n\nfast_quickstart()","category":"page"},{"location":"examples/StructDualDynProg.jl_prob5.2_3stages/","page":"StructDualDynProg: Problem 5.2, 3 stages","title":"StructDualDynProg: Problem 5.2, 3 stages","text":"EditURL = \"StructDualDynProg.jl_prob5.2_3stages.jl\"","category":"page"},{"location":"examples/StructDualDynProg.jl_prob5.2_3stages/#StructDualDynProg:-Problem-5.2,-3-stages","page":"StructDualDynProg: Problem 5.2, 3 stages","title":"StructDualDynProg: Problem 5.2, 3 stages","text":"","category":"section"},{"location":"examples/StructDualDynProg.jl_prob5.2_3stages/","page":"StructDualDynProg: Problem 5.2, 3 stages","title":"StructDualDynProg: Problem 5.2, 3 stages","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/StructDualDynProg.jl_prob5.2_3stages/","page":"StructDualDynProg: Problem 5.2, 3 stages","title":"StructDualDynProg: Problem 5.2, 3 stages","text":"This example comes from StochasticDualDynamicProgramming.jl.","category":"page"},{"location":"examples/StructDualDynProg.jl_prob5.2_3stages/","page":"StructDualDynProg: Problem 5.2, 3 stages","title":"StructDualDynProg: Problem 5.2, 3 stages","text":"using SDDP, HiGHS, Test\n\nfunction test_prob52_3stages()\n model = SDDP.LinearPolicyGraph(;\n stages = 3,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, t\n n = 4\n m = 3\n i_c = [16, 5, 32, 2]\n C = [25, 80, 6.5, 160]\n T = [8760, 7000, 1500] / 8760\n D2 = [diff([0, 3919, 7329, 10315]) diff([0, 7086, 9004, 11169])]\n p2 = [0.9, 0.1]\n @variable(sp, x[i = 1:n] >= 0, SDDP.State, initial_value = 0.0)\n @variables(sp, begin\n y[1:n, 1:m] >= 0\n v[1:n] >= 0\n penalty >= 0\n ξ[j = 1:m]\n end)\n @constraints(sp, begin\n [i = 1:n], x[i].out == x[i].in + v[i]\n [i = 1:n], sum(y[i, :]) <= x[i].in\n [j = 1:m], sum(y[:, j]) + penalty >= ξ[j]\n end)\n @stageobjective(sp, i_c'v + C' * y * T + 1e5 * penalty)\n if t != 1 # no uncertainty in first stage\n SDDP.parameterize(sp, 1:size(D2, 2), p2) do ω\n for j in 1:m\n JuMP.fix(ξ[j], D2[j, ω])\n end\n end\n end\n if t == 3\n @constraint(sp, sum(v) == 0)\n end\n end\n\n det = SDDP.deterministic_equivalent(model, HiGHS.Optimizer)\n set_silent(det)\n JuMP.optimize!(det)\n @test JuMP.objective_value(det) ≈ 406712.49 atol = 0.1\n\n SDDP.train(model; log_frequency = 10)\n @test SDDP.calculate_bound(model) ≈ 406712.49 atol = 0.1\n return\nend\n\ntest_prob52_3stages()","category":"page"},{"location":"examples/infinite_horizon_hydro_thermal/","page":"Infinite horizon hydro-thermal","title":"Infinite horizon hydro-thermal","text":"EditURL = \"infinite_horizon_hydro_thermal.jl\"","category":"page"},{"location":"examples/infinite_horizon_hydro_thermal/#Infinite-horizon-hydro-thermal","page":"Infinite horizon hydro-thermal","title":"Infinite horizon hydro-thermal","text":"","category":"section"},{"location":"examples/infinite_horizon_hydro_thermal/","page":"Infinite horizon hydro-thermal","title":"Infinite horizon hydro-thermal","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/infinite_horizon_hydro_thermal/","page":"Infinite horizon hydro-thermal","title":"Infinite horizon hydro-thermal","text":"using SDDP, HiGHS, Test, Statistics\n\nfunction infinite_hydro_thermal(; cut_type)\n Ω = [\n (inflow = 0.0, demand = 7.5),\n (inflow = 5.0, demand = 5),\n (inflow = 10.0, demand = 2.5),\n ]\n graph = SDDP.Graph(\n :root_node,\n [:week],\n [(:root_node => :week, 1.0), (:week => :week, 0.9)],\n )\n model = SDDP.PolicyGraph(\n graph;\n lower_bound = 0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, node\n @variable(\n subproblem,\n 5.0 <= reservoir <= 15.0,\n SDDP.State,\n initial_value = 10.0\n )\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n spill >= 0\n inflow\n demand\n end)\n @constraints(\n subproblem,\n begin\n reservoir.out == reservoir.in - hydro_generation - spill + inflow\n hydro_generation + thermal_generation == demand\n end\n )\n @stageobjective(subproblem, 10 * spill + thermal_generation)\n SDDP.parameterize(subproblem, Ω) do ω\n JuMP.fix(inflow, ω.inflow)\n return JuMP.fix(demand, ω.demand)\n end\n end\n SDDP.train(\n model;\n cut_type = cut_type,\n log_frequency = 100,\n sampling_scheme = SDDP.InSampleMonteCarlo(; terminate_on_cycle = true),\n parallel_scheme = SDDP.Serial(),\n cycle_discretization_delta = 0.1,\n )\n @test SDDP.calculate_bound(model) ≈ 119.167 atol = 0.1\n\n results = SDDP.simulate(model, 500)\n objectives =\n [sum(s[:stage_objective] for s in simulation) for simulation in results]\n sample_mean = round(Statistics.mean(objectives); digits = 2)\n sample_ci = round(1.96 * Statistics.std(objectives) / sqrt(500); digits = 2)\n println(\"Confidence_interval = $(sample_mean) ± $(sample_ci)\")\n @test sample_mean - sample_ci <= 119.167 <= sample_mean + sample_ci\n return\nend\n\ninfinite_hydro_thermal(; cut_type = SDDP.SINGLE_CUT)\ninfinite_hydro_thermal(; cut_type = SDDP.MULTI_CUT)","category":"page"},{"location":"apireference/#api_reference_list","page":"API Reference","title":"API Reference","text":"","category":"section"},{"location":"apireference/#Policy-graphs","page":"API Reference","title":"Policy graphs","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.Graph\nSDDP.add_node\nSDDP.add_edge\nSDDP.add_ambiguity_set\nSDDP.LinearGraph\nSDDP.MarkovianGraph\nSDDP.UnicyclicGraph\nSDDP.LinearPolicyGraph\nSDDP.MarkovianPolicyGraph\nSDDP.PolicyGraph","category":"page"},{"location":"apireference/#SDDP.Graph","page":"API Reference","title":"SDDP.Graph","text":"Graph(root_node::T) where T\n\nCreate an empty graph struture with the root node root_node.\n\nExample\n\njulia> graph = SDDP.Graph(0)\nRoot\n 0\nNodes\n {}\nArcs\n {}\n\njulia> graph = SDDP.Graph(:root)\nRoot\n root\nNodes\n {}\nArcs\n {}\n\njulia> graph = SDDP.Graph((0, 0))\nRoot\n (0, 0)\nNodes\n {}\nArcs\n {}\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.add_node","page":"API Reference","title":"SDDP.add_node","text":"add_node(graph::Graph{T}, node::T) where {T}\n\nAdd a node to the graph graph.\n\nExamples\n\njulia> graph = SDDP.Graph(:root);\n\njulia> SDDP.add_node(graph, :A)\n\njulia> graph\nRoot\n root\nNodes\n A\nArcs\n {}\n\njulia> graph = SDDP.Graph(0);\n\njulia> SDDP.add_node(graph, 2)\n\njulia> graph\nRoot\n 0\nNodes\n 2\nArcs\n {}\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.add_edge","page":"API 
Reference","title":"SDDP.add_edge","text":"add_edge(graph::Graph{T}, edge::Pair{T, T}, probability::Float64) where {T}\n\nAdd an edge to the graph graph.\n\nExamples\n\njulia> graph = SDDP.Graph(0);\n\njulia> SDDP.add_node(graph, 1)\n\njulia> SDDP.add_edge(graph, 0 => 1, 0.9)\n\njulia> graph\nRoot\n 0\nNodes\n 1\nArcs\n 0 => 1 w.p. 0.9\n\njulia> graph = SDDP.Graph(:root);\n\njulia> SDDP.add_node(graph, :A)\n\njulia> SDDP.add_edge(graph, :root => :A, 1.0)\n\njulia> graph\nRoot\n root\nNodes\n A\nArcs\n root => A w.p. 1.0\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.add_ambiguity_set","page":"API Reference","title":"SDDP.add_ambiguity_set","text":"add_ambiguity_set(\n graph::Graph{T},\n set::Vector{T},\n lipschitz::Vector{Float64},\n) where {T}\n\nAdd set to the belief partition of graph.\n\nlipschitz is a vector of Lipschitz constants, with one element for each node in set. The Lipschitz constant is the maximum slope of the cost-to-go function with respect to the belief state associated with each node at any point in the state-space.\n\nExamples\n\njulia> graph = SDDP.LinearGraph(3)\nRoot\n 0\nNodes\n 1\n 2\n 3\nArcs\n 0 => 1 w.p. 1.0\n 1 => 2 w.p. 1.0\n 2 => 3 w.p. 1.0\n\njulia> SDDP.add_ambiguity_set(graph, [1, 2], [1e3, 1e2])\n\njulia> SDDP.add_ambiguity_set(graph, [3], [1e5])\n\njulia> graph\nRoot\n 0\nNodes\n 1\n 2\n 3\nArcs\n 0 => 1 w.p. 1.0\n 1 => 2 w.p. 1.0\n 2 => 3 w.p. 1.0\nPartitions\n {1, 2}\n {3}\n\n\n\n\n\nadd_ambiguity_set(graph::Graph{T}, set::Vector{T}, lipschitz::Float64)\n\nAdd set to the belief partition of graph.\n\nlipschitz is a Lipschitz constant for each node in set. The Lipschitz constant is the maximum slope of the cost-to-go function with respect to the belief state associated with each node at any point in the state-space.\n\nExamples\n\njulia> graph = SDDP.LinearGraph(3);\n\njulia> SDDP.add_ambiguity_set(graph, [1, 2], 1e3)\n\njulia> SDDP.add_ambiguity_set(graph, [3], 1e5)\n\njulia> graph\nRoot\n 0\nNodes\n 1\n 2\n 3\nArcs\n 0 => 1 w.p. 1.0\n 1 => 2 w.p. 1.0\n 2 => 3 w.p. 1.0\nPartitions\n {1, 2}\n {3}\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.LinearGraph","page":"API Reference","title":"SDDP.LinearGraph","text":"LinearGraph(stages::Int)\n\nCreate a linear graph with stages number of nodes.\n\nExamples\n\njulia> graph = SDDP.LinearGraph(3)\nRoot\n 0\nNodes\n 1\n 2\n 3\nArcs\n 0 => 1 w.p. 1.0\n 1 => 2 w.p. 1.0\n 2 => 3 w.p. 1.0\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.MarkovianGraph","page":"API Reference","title":"SDDP.MarkovianGraph","text":"MarkovianGraph(transition_matrices::Vector{Matrix{Float64}})\n\nConstruct a Markovian graph from the vector of transition matrices.\n\ntransition_matrices[t][i, j] gives the probability of transitioning from Markov state i in stage t - 1 to Markov state j in stage t.\n\nThe dimension of the first transition matrix should be (1, N), and transition_matrics[1][1, i] is the probability of transitioning from the root node to the Markov state i.\n\nExamples\n\njulia> graph = SDDP.MarkovianGraph([ones(1, 1), [0.5 0.5], [0.8 0.2; 0.2 0.8]])\nRoot\n (0, 1)\nNodes\n (1, 1)\n (2, 1)\n (2, 2)\n (3, 1)\n (3, 2)\nArcs\n (0, 1) => (1, 1) w.p. 1.0\n (1, 1) => (2, 1) w.p. 0.5\n (1, 1) => (2, 2) w.p. 0.5\n (2, 1) => (3, 1) w.p. 0.8\n (2, 1) => (3, 2) w.p. 0.2\n (2, 2) => (3, 1) w.p. 0.2\n (2, 2) => (3, 2) w.p. 
0.8\n\n\n\n\n\nMarkovianGraph(;\n stages::Int,\n transition_matrix::Matrix{Float64},\n root_node_transition::Vector{Float64},\n)\n\nConstruct a Markovian graph object with stages number of stages and time-independent Markov transition probabilities.\n\ntransition_matrix must be a square matrix, and the probability of transitioning from Markov state i in stage t to Markov state j in stage t + 1 is given by transition_matrix[i, j].\n\nroot_node_transition[i] is the probability of transitioning from the root node to Markov state i in the first stage.\n\nExamples\n\njulia> graph = SDDP.MarkovianGraph(;\n stages = 3,\n transition_matrix = [0.8 0.2; 0.2 0.8],\n root_node_transition = [0.5, 0.5],\n )\nRoot\n (0, 1)\nNodes\n (1, 1)\n (1, 2)\n (2, 1)\n (2, 2)\n (3, 1)\n (3, 2)\nArcs\n (0, 1) => (1, 1) w.p. 0.5\n (0, 1) => (1, 2) w.p. 0.5\n (1, 1) => (2, 1) w.p. 0.8\n (1, 1) => (2, 2) w.p. 0.2\n (1, 2) => (2, 1) w.p. 0.2\n (1, 2) => (2, 2) w.p. 0.8\n (2, 1) => (3, 1) w.p. 0.8\n (2, 1) => (3, 2) w.p. 0.2\n (2, 2) => (3, 1) w.p. 0.2\n (2, 2) => (3, 2) w.p. 0.8\n\n\n\n\n\nMarkovianGraph(\n simulator::Function;\n budget::Union{Int,Vector{Int}},\n scenarios::Int = 1000,\n)\n\nConstruct a Markovian graph by fitting Markov chain to scenarios generated by simulator().\n\nbudget is the total number of nodes in the resulting Markov chain. This can either be specified as a single Int, in which case we will attempt to intelligently distributed the nodes between stages. Alternatively, budget can be a Vector{Int}, which details the number of Markov state to have in each stage.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.UnicyclicGraph","page":"API Reference","title":"SDDP.UnicyclicGraph","text":"UnicyclicGraph(discount_factor::Float64; num_nodes::Int = 1)\n\nConstruct a graph composed of num_nodes nodes that form a single cycle, with a probability of discount_factor of continuing the cycle.\n\nExamples\n\njulia> graph = SDDP.UnicyclicGraph(0.9; num_nodes = 2)\nRoot\n 0\nNodes\n 1\n 2\nArcs\n 0 => 1 w.p. 1.0\n 1 => 2 w.p. 1.0\n 2 => 1 w.p. 0.9\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.LinearPolicyGraph","page":"API Reference","title":"SDDP.LinearPolicyGraph","text":"LinearPolicyGraph(builder::Function; stages::Int, kwargs...)\n\nCreate a linear policy graph with stages number of stages.\n\nKeyword arguments\n\nstages: the number of stages in the graph\nkwargs: other keyword arguments are passed to SDDP.PolicyGraph.\n\nExamples\n\njulia> SDDP.LinearPolicyGraph(; stages = 2, lower_bound = 0.0) do sp, t\n # ... build model ...\nend\nA policy graph with 2 nodes.\nNode indices: 1, 2\n\nis equivalent to\n\njulia> graph = SDDP.LinearGraph(2);\n\njulia> SDDP.PolicyGraph(graph; lower_bound = 0.0) do sp, t\n # ... build model ...\nend\nA policy graph with 2 nodes.\nNode indices: 1, 2\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.MarkovianPolicyGraph","page":"API Reference","title":"SDDP.MarkovianPolicyGraph","text":"MarkovianPolicyGraph(\n builder::Function;\n transition_matrices::Vector{Array{Float64,2}},\n kwargs...\n)\n\nCreate a Markovian policy graph based on the transition matrices given in transition_matrices.\n\nKeyword arguments\n\ntransition_matrices[t][i, j] gives the probability of transitioning from Markov state i in stage t - 1 to Markov state j in stage t. 
The dimension of the first transition matrix should be (1, N), and transition_matrics[1][1, i] is the probability of transitioning from the root node to the Markov state i.\nkwargs: other keyword arguments are passed to SDDP.PolicyGraph.\n\nSee also\n\nSee SDDP.MarkovianGraph for other ways of specifying a Markovian policy graph.\n\nSee SDDP.PolicyGraph for the other keyword arguments.\n\nExamples\n\njulia> SDDP.MarkovianPolicyGraph(;\n transition_matrices = [ones(1, 1), [0.5 0.5], [0.8 0.2; 0.2 0.8]],\n lower_bound = 0.0,\n ) do sp, node\n # ... build model ...\n end\nA policy graph with 5 nodes.\n Node indices: (1, 1), (2, 1), (2, 2), (3, 1), (3, 2)\n\nis equivalent to\n\njulia> graph = SDDP.MarkovianGraph([ones(1, 1), [0.5 0.5], [0.8 0.2; 0.2 0.8]]);\n\njulia> SDDP.PolicyGraph(graph; lower_bound = 0.0) do sp, t\n # ... build model ...\nend\nA policy graph with 5 nodes.\n Node indices: (1, 1), (2, 1), (2, 2), (3, 1), (3, 2)\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.PolicyGraph","page":"API Reference","title":"SDDP.PolicyGraph","text":"PolicyGraph(\n builder::Function,\n graph::Graph{T};\n sense::Symbol = :Min,\n lower_bound = -Inf,\n upper_bound = Inf,\n optimizer = nothing,\n) where {T}\n\nConstruct a policy graph based on the graph structure of graph. (See SDDP.Graph for details.)\n\nKeyword arguments\n\nsense: whether we are minimizing (:Min) or maximizing (:Max).\nlower_bound: if mimimizing, a valid lower bound for the cost to go in all subproblems.\nupper_bound: if maximizing, a valid upper bound for the value to go in all subproblems.\noptimizer: the optimizer to use for each of the subproblems\n\nExamples\n\nfunction builder(subproblem::JuMP.Model, index)\n # ... subproblem definition ...\nend\n\nmodel = PolicyGraph(\n builder,\n graph;\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n)\n\nOr, using the Julia do ... end syntax:\n\nmodel = PolicyGraph(\n graph;\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, index\n # ... subproblem definitions ...\nend\n\n\n\n\n\n","category":"type"},{"location":"apireference/#Subproblem-definition","page":"API Reference","title":"Subproblem definition","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"@stageobjective\nSDDP.parameterize\nSDDP.add_objective_state\nSDDP.objective_state\nSDDP.Noise","category":"page"},{"location":"apireference/#SDDP.@stageobjective","page":"API Reference","title":"SDDP.@stageobjective","text":"@stageobjective(subproblem, expr)\n\nSet the stage-objective of subproblem to expr.\n\nExamples\n\n@stageobjective(subproblem, 2x + y)\n\n\n\n\n\n","category":"macro"},{"location":"apireference/#SDDP.parameterize","page":"API Reference","title":"SDDP.parameterize","text":"parameterize(\n modify::Function,\n subproblem::JuMP.Model,\n realizations::Vector{T},\n probability::Vector{Float64} = fill(1.0 / length(realizations))\n) where {T}\n\nAdd a parameterization function modify to subproblem. 
The modify function takes one argument and modifies subproblem based on the realization of the noise sampled from realizations with corresponding probabilities probability.\n\nIn order to conduct an out-of-sample simulation, modify should accept arguments that are not in realizations (but still of type T).\n\nExamples\n\nSDDP.parameterize(subproblem, [1, 2, 3], [0.4, 0.3, 0.3]) do ω\n JuMP.set_upper_bound(x, ω)\nend\n\n\n\n\n\nparameterize(node::Node, noise)\n\nParameterize node node with the noise noise.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.add_objective_state","page":"API Reference","title":"SDDP.add_objective_state","text":"add_objective_state(update::Function, subproblem::JuMP.Model; kwargs...)\n\nAdd an objective state variable to subproblem.\n\nRequired kwargs are:\n\ninitial_value: The initial value of the objective state variable at the root node.\nlipschitz: The lipschitz constant of the objective state variable.\n\nSetting a tight value for the lipschitz constant can significantly improve the speed of convergence.\n\nOptional kwargs are:\n\nlower_bound: A valid lower bound for the objective state variable. Can be -Inf.\nupper_bound: A valid upper bound for the objective state variable. Can be +Inf.\n\nSetting tight values for these optional variables can significantly improve the speed of convergence.\n\nIf the objective state is N-dimensional, each keyword argument must be an NTuple{N,Float64}. For example, initial_value = (0.0, 1.0).\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.objective_state","page":"API Reference","title":"SDDP.objective_state","text":"objective_state(subproblem::JuMP.Model)\n\nReturn the current objective state of the problem.\n\nCan only be called from SDDP.parameterize.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.Noise","page":"API Reference","title":"SDDP.Noise","text":"Noise(support, probability)\n\nAn atom of a discrete random variable at the point of support support and associated probability probability.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#Training-the-policy","page":"API Reference","title":"Training the policy","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.numerical_stability_report\nSDDP.train\nSDDP.termination_status\nSDDP.write_cuts_to_file\nSDDP.read_cuts_from_file\nSDDP.write_log_to_csv\nSDDP.set_numerical_difficulty_callback","category":"page"},{"location":"apireference/#SDDP.numerical_stability_report","page":"API Reference","title":"SDDP.numerical_stability_report","text":"numerical_stability_report(\n [io::IO = stdout,]\n model::PolicyGraph;\n by_node::Bool = false,\n print::Bool = true,\n warn::Bool = true,\n)\n\nPrint a report identifying possible numeric stability issues.\n\nKeyword arguments\n\nIf by_node, print a report for each node in the graph.\nIf print, print to io.\nIf warn, warn if the coefficients may cause numerical issues.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.train","page":"API Reference","title":"SDDP.train","text":"SDDP.train(model::PolicyGraph; kwargs...)\n\nTrain the policy for model.\n\nKeyword arguments\n\niteration_limit::Int: number of iterations to conduct before termination.\ntime_limit::Float64: number of seconds to train before termination.\nstoping_rules: a vector of SDDP.AbstractStoppingRules. Defaults to SimulationStoppingRule.\nprint_level::Int: control the level of printing to the screen. Defaults to 1. 
Set to 0 to disable all printing.\nlog_file::String: filepath at which to write a log of the training progress. Defaults to SDDP.log.\nlog_frequency::Int: control the frequency with which the logging is outputted (iterations/log). It must be at least 1. Defaults to 1.\nlog_every_seconds::Float64: control the frequency with which the logging is outputted (seconds/log). Defaults to 0.0.\nlog_every_iteration::Bool; over-rides log_frequency and log_every_seconds to force every iteration to be printed. Defaults to false.\nrun_numerical_stability_report::Bool: generate (and print) a numerical stability report prior to solve. Defaults to true.\nrefine_at_similar_nodes::Bool: if SDDP can detect that two nodes have the same children, it can cheaply add a cut discovered at one to the other. In almost all cases this should be set to true.\ncut_deletion_minimum::Int: the minimum number of cuts to cache before deleting cuts from the subproblem. The impact on performance is solver specific; however, smaller values result in smaller subproblems (and therefore quicker solves), at the expense of more time spent performing cut selection.\nrisk_measure: the risk measure to use at each node. Defaults to Expectation.\nroot_node_risk_measure::AbstractRiskMeasure: the risk measure to use at the root node when computing the Bound column. Note that the choice of this option does not change the primal policy, and it applies only if the transition from the root node to the first stage is stochastic. Defaults to Expectation.\nsampling_scheme: a sampling scheme to use on the forward pass of the algorithm. Defaults to InSampleMonteCarlo.\nbackward_sampling_scheme: a backward pass sampling scheme to use on the backward pass of the algorithm. Defaults to CompleteSampler.\ncut_type: choose between SDDP.SINGLE_CUT and SDDP.MULTI_CUT versions of SDDP.\ndashboard::Bool: open a visualization of the training over time. Defaults to false.\nparallel_scheme::AbstractParallelScheme: specify a scheme for solving in parallel. Defaults to Threaded().\nforward_pass::AbstractForwardPass: specify a scheme to use for the forward passes.\nforward_pass_resampling_probability::Union{Nothing,Float64}: set to a value in (0, 1) to enable RiskAdjustedForwardPass. Defaults to nothing (disabled).\nadd_to_existing_cuts::Bool: set to true to allow training a model that was previously trained. Defaults to false.\nduality_handler::AbstractDualityHandler: specify a duality handler to use when creating cuts.\npost_iteration_callback::Function: a callback with the signature post_iteration_callback(::IterationResult) that is evaluated after each iteration of the algorithm.\n\nThere is also a special option for infinite horizon problems\n\ncycle_discretization_delta: the maximum distance between states allowed on the forward pass. 
This is for advanced users only and needs to be used in conjunction with a different sampling_scheme.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.termination_status","page":"API Reference","title":"SDDP.termination_status","text":"termination_status(model::PolicyGraph)::Symbol\n\nQuery the reason why the training stopped.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.write_cuts_to_file","page":"API Reference","title":"SDDP.write_cuts_to_file","text":"write_cuts_to_file(\n model::PolicyGraph{T},\n filename::String;\n kwargs...,\n) where {T}\n\nWrite the cuts that form the policy in model to filename in JSON format.\n\nKeyword arguments\n\nnode_name_parser is a function which converts the name of each node into a string representation. It has the signature: node_name_parser(::T)::String.\nwrite_only_selected_cuts: write only the selected cuts to the JSON file. Defaults to false.\n\nSee also SDDP.read_cuts_from_file.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.read_cuts_from_file","page":"API Reference","title":"SDDP.read_cuts_from_file","text":"read_cuts_from_file(\n model::PolicyGraph{T},\n filename::String;\n kwargs...,\n) where {T}\n\nRead cuts (saved using SDDP.write_cuts_to_file) from filename into model.\n\nSince T can be an arbitrary Julia type, the conversion to JSON is lossy. When reading, read_cuts_from_file only supports T=Int, T=NTuple{N, Int}, and T=Symbol. If you have manually created a policy graph with a different node type T, provide a function node_name_parser with the signature\n\nKeyword arguments\n\nnode_name_parser(T, name::String)::T where {T} that returns the name of each node given the string name name. If node_name_parser returns nothing, those cuts are skipped.\ncut_selection::Bool: whether to run the cut selection algorithm when adding the cuts to the model.\n\nSee also SDDP.write_cuts_to_file.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.write_log_to_csv","page":"API Reference","title":"SDDP.write_log_to_csv","text":"write_log_to_csv(model::PolicyGraph, filename::String)\n\nWrite the log of the most recent training to a csv for post-analysis.\n\nAssumes that the model has been trained via SDDP.train.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.set_numerical_difficulty_callback","page":"API Reference","title":"SDDP.set_numerical_difficulty_callback","text":"set_numerical_difficulty_callback(\n model::PolicyGraph,\n callback::Function,\n)\n\nSet a callback function callback(::PolicyGraph, ::Node; require_dual::Bool) that is run when the optimizer terminates without finding a primal solution (and dual solution if require_dual is true).\n\nDefault callback\n\nThe default callback is a small variation of:\n\nfunction callback(::PolicyGraph, node::Node; require_dual::Bool)\n MOI.Utilities.reset_optimizer(node.subproblem)\n optimize!(node.subproblem)\n return\nend\n\nThis callback is the default because a common issue is solvers declaring the problem infeasible because of numerical issues related to the large number of cutting planes. Resetting the subproblem, and therefore starting from a fresh problem instead of warm-starting from the previous solution, is often enough to fix the problem and allow more iterations.\n\nOther callbacks\n\nIn cases where the problem is truly infeasible (not because of numerical issues), it may be helpful to write out the irreducible infeasible subsystem (IIS) for debugging. 
For this use-case, use a callback as follows:\n\nfunction callback(::PolicyGraph, node::Node; require_dual::Bool)\n JuMP.compute_conflict!(node.suprobblem)\n status = JuMP.get_attribute(node.subproblem, MOI.ConflictStatus())\n if status == MOI.CONFLICT_FOUND\n iis_model, _ = JuMP.copy_conflict(node.subproblem)\n print(iis_model)\n end\n return\nend\nSDDP.set_numerical_difficulty_callback(model, callback)\n\n\n\n\n\n","category":"function"},{"location":"apireference/#api_stopping_rules","page":"API Reference","title":"Stopping rules","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.AbstractStoppingRule\nSDDP.stopping_rule_status\nSDDP.convergence_test\nSDDP.IterationLimit\nSDDP.TimeLimit\nSDDP.Statistical\nSDDP.BoundStalling\nSDDP.StoppingChain\nSDDP.SimulationStoppingRule\nSDDP.FirstStageStoppingRule","category":"page"},{"location":"apireference/#SDDP.AbstractStoppingRule","page":"API Reference","title":"SDDP.AbstractStoppingRule","text":"AbstractStoppingRule\n\nThe abstract type for the stopping-rule interface.\n\nYou need to define the following methods:\n\nSDDP.stopping_rule_status\nSDDP.convergence_test\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.stopping_rule_status","page":"API Reference","title":"SDDP.stopping_rule_status","text":"stopping_rule_status(::AbstractStoppingRule)::Symbol\n\nReturn a symbol describing the stopping rule.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.convergence_test","page":"API Reference","title":"SDDP.convergence_test","text":"convergence_test(\n model::PolicyGraph,\n log::Vector{Log},\n ::AbstractStoppingRule,\n)::Bool\n\nReturn a Bool indicating if the algorithm should terminate the training.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.IterationLimit","page":"API Reference","title":"SDDP.IterationLimit","text":"IterationLimit(limit::Int)\n\nTeriminate the algorithm after limit number of iterations.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.TimeLimit","page":"API Reference","title":"SDDP.TimeLimit","text":"TimeLimit(limit::Float64)\n\nTeriminate the algorithm after limit seconds of computation.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.Statistical","page":"API Reference","title":"SDDP.Statistical","text":"Statistical(;\n num_replications::Int,\n iteration_period::Int = 1,\n z_score::Float64 = 1.96,\n verbose::Bool = true,\n disable_warning::Bool = false,\n)\n\nPerform an in-sample Monte Carlo simulation of the policy with num_replications replications every iteration_periods and terminate if the deterministic bound (lower if minimizing) falls into the confidence interval for the mean of the simulated cost.\n\nIf verbose = true, print the confidence interval.\n\nIf disable_warning = true, disable the warning telling you not to use this stopping rule (see below).\n\nWhy this stopping rule is not good\n\nThis stopping rule is one of the most common stopping rules seen in the literature. Don't follow the crowd. It is a poor choice for your model, and should be rarely used. 
Instead, you should use the default stopping rule, or use a fixed limit like a time or iteration limit.\n\nTo understand why this stopping rule is a bad idea, assume we have conducted num_replications simulations and the objectives are in a vector objectives::Vector{Float64}.\n\nOur mean is μ = mean(objectives) and the half-width of the confidence interval is w = z_score * std(objectives) / sqrt(num_replications).\n\nMany papers suggest terminating the algorithm once the deterministic bound (lower if minimizing, upper if maximizing) is contained within the confidence interval. That is, if μ - w <= bound <= μ + w. Even worse, some papers define an optimization gap of (μ + w) / bound (if minimizing) or (μ - w) / bound (if maximizing), and they terminate once the gap is less than a value like 1%.\n\nBoth of these approaches are misleading, and more often than not, they will result in terminating with a sub-optimal policy that performs worse than expected. There are two main reasons for this:\n\nThe half-width depends on the number of replications. To reduce the computational cost, users are often tempted to choose a small number of replications. This increases the half-width and makes it more likely that the algorithm will stop early. But if we choose a large number of replications, then the computational cost is high, and we would have been better off to run a fixed number of iterations and use that computational time to run extra training iterations.\nThe confidence interval assumes that the simulated values are normally distributed. In infinite horizon models, this is almost never the case. The distribution is usually closer to exponential or log-normal.\n\nThere is a third, more technical reason which relates to the conditional dependence of constructing multiple confidence intervals.\n\nThe default value of z_score = 1.96 corresponds to a 95% confidence interval. You should interpret the interval as \"if we re-run this simulation 100 times, then the true mean will lie in the confidence interval 95 times out of 100.\" But if the bound is within the confidence interval, then we know the true mean cannot be better than the bound. Therfore, there is a more than 95% chance that the mean is within the interval.\n\nA separate problem arises if we simulate, find that the bound is outside the confidence interval, keep training, and then re-simulate to compute a new confidence interval. Because we will terminate when the bound enters the confidence interval, the repeated construction of a confidence interval means that the unconditional probability that we terminate with a false positive is larger than 5% (there are now more chances that the sample mean is optimistic and that the confidence interval includes the bound but not the true mean). One fix is to simulate with a sequentially increasing number of replicates, so that the unconditional probability stays at 95%, but this runs into the problem of computational cost. For more information on sequential sampling, see, for example, Güzin Bayraksan, David P. Morton, (2011) A Sequential Sampling Procedure for Stochastic Programming. 
Operations Research 59(4):898-913.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.BoundStalling","page":"API Reference","title":"SDDP.BoundStalling","text":"BoundStalling(num_previous_iterations::Int, tolerance::Float64)\n\nTerminate the algorithm once the deterministic bound (lower if minimizing, upper if maximizing) fails to improve by more than tolerance in absolute terms for more than num_previous_iterations consecutive iterations, provided it has improved relative to the bound after the first iteration.\n\nChecking for an improvement relative to the first iteration avoids early termination in a situation where the bound fails to improve for the first N iterations. This frequently happens in models with a large number of stages, where it takes time for the cuts to propagate backward enough to modify the bound of the root node.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.StoppingChain","page":"API Reference","title":"SDDP.StoppingChain","text":"StoppingChain(rules::AbstractStoppingRule...)\n\nTerminate once all of the rules are satisfied.\n\nThis stopping rule short-circuits, so subsequent rules are only tested if the previous rules pass.\n\nExamples\n\nA stopping rule that runs 100 iterations, then checks for the bound stalling:\n\nStoppingChain(IterationLimit(100), BoundStalling(5, 0.1))\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.SimulationStoppingRule","page":"API Reference","title":"SDDP.SimulationStoppingRule","text":"SimulationStoppingRule(;\n sampling_scheme::AbstractSamplingScheme = SDDP.InSampleMonteCarlo(),\n replications::Int = -1,\n period::Int = -1,\n distance_tol::Float64 = 1e-2,\n bound_tol::Float64 = 1e-4,\n)\n\nTerminate the algorithm using a mix of heuristics. Unless you know otherwise, this is typically a good default.\n\nTermination criteria\n\nFirst, we check that the deterministic bound has stabilized. That is, over the last five iterations, the deterministic bound has changed by less than an absolute or relative tolerance of bound_tol.\n\nThen, if we have not done one in the last period iterations, we perform a primal simulation of the policy using replications out-of-sample realizations from sampling_scheme. The realizations are stored and re-used in each simulation. From each simulation, we record the value of the stage objective. We terminate the training if each of the trajectories in two consecutive simulations differs by less than distance_tol.\n\nBy default, replications and period are -1, and SDDP.jl will guess good values for these. 
Over-ride the default behavior by setting an appropriate value.\n\nExample\n\nSDDP.train(model; stopping_rules = [SimulationStoppingRule()])\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.FirstStageStoppingRule","page":"API Reference","title":"SDDP.FirstStageStoppingRule","text":"FirstStageStoppingRule(; atol::Float64 = 1e-3, iterations::Int = 50)\n\nTerminate the algorithm when the outgoing values of the first-stage state variables have not changed by more than atol for iterations number of consecutive iterations.\n\nExample\n\nSDDP.train(model; stopping_rules = [FirstStageStoppingRule()])\n\n\n\n\n\n","category":"type"},{"location":"apireference/#Sampling-schemes","page":"API Reference","title":"Sampling schemes","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.AbstractSamplingScheme\nSDDP.sample_scenario\nSDDP.InSampleMonteCarlo\nSDDP.OutOfSampleMonteCarlo\nSDDP.Historical\nSDDP.PSRSamplingScheme\nSDDP.SimulatorSamplingScheme","category":"page"},{"location":"apireference/#SDDP.AbstractSamplingScheme","page":"API Reference","title":"SDDP.AbstractSamplingScheme","text":"AbstractSamplingScheme\n\nThe abstract type for the sampling-scheme interface.\n\nYou need to define the following methods:\n\nSDDP.sample_scenario\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.sample_scenario","page":"API Reference","title":"SDDP.sample_scenario","text":"sample_scenario(graph::PolicyGraph{T}, ::AbstractSamplingScheme) where {T}\n\nSample a scenario from the policy graph graph based on the sampling scheme.\n\nReturns ::Tuple{Vector{Tuple{T, <:Any}}, Bool}, where the first element is the scenario, and the second element is a Boolean flag indicating if the scenario was terminated due to the detection of a cycle.\n\nThe scenario is a list of tuples (type Vector{Tuple{T, <:Any}}) where the first component of each tuple is the index of the node, and the second component is the stagewise-independent noise term observed in that node.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.InSampleMonteCarlo","page":"API Reference","title":"SDDP.InSampleMonteCarlo","text":"InSampleMonteCarlo(;\n max_depth::Int = 0,\n terminate_on_cycle::Function = false,\n terminate_on_dummy_leaf::Function = true,\n rollout_limit::Function = (i::Int) -> typemax(Int),\n initial_node::Any = nothing,\n)\n\nA Monte Carlo sampling scheme using the in-sample data from the policy graph definition.\n\nIf terminate_on_cycle, terminate the forward pass once a cycle is detected. If max_depth > 0, return once max_depth nodes have been sampled. If terminate_on_dummy_leaf, terminate the forward pass with 1 - probability of sampling a child node.\n\nNote that if terminate_on_cycle = false and terminate_on_dummy_leaf = false then max_depth must be set > 0.\n\nControl which node the trajectories start from using initial_node. If it is left as nothing, the root node is used as the starting node.\n\nYou can use rollout_limit to set iteration specific depth limits. 
For example:\n\nInSampleMonteCarlo(rollout_limit = i -> 2 * i)\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.OutOfSampleMonteCarlo","page":"API Reference","title":"SDDP.OutOfSampleMonteCarlo","text":"OutOfSampleMonteCarlo(\n f::Function,\n graph::PolicyGraph;\n use_insample_transition::Bool = false,\n max_depth::Int = 0,\n terminate_on_cycle::Bool = false,\n terminate_on_dummy_leaf::Bool = true,\n rollout_limit::Function = i -> typemax(Int),\n initial_node = nothing,\n)\n\nCreate a Monte Carlo sampler using out-of-sample probabilities and/or supports for the stagewise-independent noise terms, and out-of-sample probabilities for the node-transition matrix.\n\nf is a function that takes the name of a node and returns a tuple containing a vector of new SDDP.Noise terms for the children of that node, and a vector of new SDDP.Noise terms for the stagewise-independent noise.\n\nIf f is called with the name of the root node (e.g., 0 in a linear policy graph, (0, 1) in a Markovian Policy Graph), then return a vector of SDDP.Noise for the children of the root node.\n\nIf use_insample_transition, the in-sample transition probabilities will be used. Therefore, f should only return a vector of the stagewise-independent noise terms, and f will not be called for the root node.\n\nIf terminate_on_cycle, terminate the forward pass once a cycle is detected. If max_depth > 0, return once max_depth nodes have been sampled. If terminate_on_dummy_leaf, terminate the forward pass with 1 - probability of sampling a child node.\n\nNote that if terminate_on_cycle = false and terminate_on_dummy_leaf = false then max_depth must be set > 0.\n\nControl which node the trajectories start from using initial_node. If it is left as nothing, the root node is used as the starting node.\n\nIf a node is deterministic, pass [SDDP.Noise(nothing, 1.0)] as the vector of noise terms.\n\nYou can use rollout_limit to set iteration specific depth limits. For example:\n\nOutOfSampleMonteCarlo(rollout_limit = i -> 2 * i)\n\nExamples\n\nGiven linear policy graph graph with T stages:\n\nsampler = OutOfSampleMonteCarlo(graph) do node\n if node == 0\n return [SDDP.Noise(1, 1.0)]\n else\n noise_terms = [SDDP.Noise(node, 0.3), SDDP.Noise(node + 1, 0.7)]\n children = node < T ? 
[SDDP.Noise(node + 1, 0.9)] : SDDP.Noise{Int}[]\n return children, noise_terms\n end\nend\n\nGiven linear policy graph graph with T stages:\n\nsampler = OutOfSampleMonteCarlo(graph, use_insample_transition=true) do node\n return [SDDP.Noise(node, 0.3), SDDP.Noise(node + 1, 0.7)]\nend\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.Historical","page":"API Reference","title":"SDDP.Historical","text":"Historical(\n scenarios::Vector{Vector{Tuple{T,S}}},\n probability::Vector{Float64};\n terminate_on_cycle::Bool = false,\n) where {T,S}\n\nA sampling scheme that samples a scenario from the vector of scenarios scenarios according to probability.\n\nExamples\n\nHistorical(\n [\n [(1, 0.5), (2, 1.0), (3, 0.5)],\n [(1, 0.5), (2, 0.0), (3, 1.0)],\n [(1, 1.0), (2, 0.0), (3, 0.0)]\n ],\n [0.2, 0.5, 0.3],\n)\n\n\n\n\n\nHistorical(\n scenarios::Vector{Vector{Tuple{T,S}}};\n terminate_on_cycle::Bool = false,\n) where {T,S}\n\nA deterministic sampling scheme that iterates through the vector of provided scenarios.\n\nExamples\n\nHistorical([\n [(1, 0.5), (2, 1.0), (3, 0.5)],\n [(1, 0.5), (2, 0.0), (3, 1.0)],\n [(1, 1.0), (2, 0.0), (3, 0.0)],\n])\n\n\n\n\n\nHistorical(\n scenario::Vector{Tuple{T,S}};\n terminate_on_cycle::Bool = false,\n) where {T,S}\n\nA deterministic sampling scheme that always samples scenario.\n\nExamples\n\nHistorical([(1, 0.5), (2, 1.5), (3, 0.75)])\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.PSRSamplingScheme","page":"API Reference","title":"SDDP.PSRSamplingScheme","text":"PSRSamplingScheme(N::Int; sampling_scheme = InSampleMonteCarlo())\n\nA sampling scheme with N scenarios, similar to how PSR does it.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.SimulatorSamplingScheme","page":"API Reference","title":"SDDP.SimulatorSamplingScheme","text":"SimulatorSamplingScheme(simulator::Function)\n\nCreate a sampling scheme based on a univariate scenario generator simulator, which returns a Vector{Float64} when called with no arguments like simulator().\n\nThis sampling scheme must be used with a Markovian graph constructed from the same simulator.\n\nThe sample space for SDDP.parameterize must be a tuple with 1 or 2 values, value is the Markov state and the second value is the random variable for the current node. 
If the node is deterministic, use Ω = [(markov_state,)].\n\nThis sampling scheme generates a new scenario by calling simulator(), and then picking the sequence of nodes in the Markovian graph that is closest to the new trajectory.\n\nExample\n\njulia> using SDDP\n\njulia> import HiGHS\n\njulia> simulator() = cumsum(rand(10))\nsimulator (generic function with 1 method)\n\njulia> model = SDDP.PolicyGraph(\n SDDP.MarkovianGraph(simulator; budget = 20, scenarios = 100);\n sense = :Max,\n upper_bound = 12,\n optimizer = HiGHS.Optimizer,\n ) do sp, node\n t, markov_state = node\n @variable(sp, x >= 0, SDDP.State, initial_value = 1)\n @variable(sp, u >= 0)\n @constraint(sp, x.out == x.in - u)\n # Elements of Ω MUST be a tuple in which `markov_state` is the first\n # element.\n Ω = [(markov_state, (u = u_max,)) for u_max in (0.0, 0.5)]\n SDDP.parameterize(sp, Ω) do (markov_state, ω)\n set_upper_bound(u, ω.u)\n @stageobjective(sp, markov_state * u)\n end\n end;\n\njulia> SDDP.train(\n model;\n print_level = 0,\n iteration_limit = 10,\n sampling_scheme = SDDP.SimulatorSamplingScheme(simulator),\n )\n\n\n\n\n\n\n","category":"type"},{"location":"apireference/#Parallel-schemes","page":"API Reference","title":"Parallel schemes","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.AbstractParallelScheme\nSDDP.Serial\nSDDP.Threaded\nSDDP.Asynchronous","category":"page"},{"location":"apireference/#SDDP.AbstractParallelScheme","page":"API Reference","title":"SDDP.AbstractParallelScheme","text":"AbstractParallelScheme\n\nAbstract type for different parallelism schemes.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.Serial","page":"API Reference","title":"SDDP.Serial","text":"Serial()\n\nRun SDDP in serial mode.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.Threaded","page":"API Reference","title":"SDDP.Threaded","text":"Threaded()\n\nRun SDDP in multi-threaded mode.\n\nUse julia --threads N to start Julia with N threads. In most cases, you should pick N to be the number of physical cores on your machine.\n\ndanger: Danger\nThis plug-in is experimental, and parts of SDDP.jl may not be threadsafe. If you encounter any problems or crashes, please open a GitHub issue.\n\nExample\n\nSDDP.train(model; parallel_scheme = SDDP.Threaded())\nSDDP.simulate(model; parallel_scheme = SDDP.Threaded())\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.Asynchronous","page":"API Reference","title":"SDDP.Asynchronous","text":"Asynchronous(\n [init_callback::Function,]\n slave_pids::Vector{Int} = workers();\n use_master::Bool = true,\n)\n\nRun SDDP in asynchronous mode workers with pid's slave_pids.\n\nAfter initializing the models on each worker, call init_callback(model). 
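As an illustrative sketch only (not part of the original docstring), and assuming worker processes have already been added with Distributed.addprocs and that HiGHS is installed, the do-block syntax can be used to supply init_callback:\n\nSDDP.train(\n model;\n parallel_scheme = SDDP.Asynchronous() do m\n # This callback runs once on each worker after the model is copied there.\n JuMP.set_optimizer(m, HiGHS.Optimizer)\n end,\n)\n\n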
Note that init_callback is run locally on the worker and not on the master thread.\n\nIf use_master is true, iterations are also conducted on the master process.\n\n\n\n\n\nAsynchronous(\n solver::Any,\n slave_pids::Vector{Int} = workers();\n use_master::Bool = true,\n)\n\nRun SDDP in asynchronous mode workers with pid's slave_pids.\n\nSet the optimizer on each worker by calling JuMP.set_optimizer(model, solver).\n\n\n\n\n\n","category":"type"},{"location":"apireference/#Forward-passes","page":"API Reference","title":"Forward passes","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.AbstractForwardPass\nSDDP.DefaultForwardPass\nSDDP.RevisitingForwardPass\nSDDP.RiskAdjustedForwardPass\nSDDP.AlternativeForwardPass\nSDDP.AlternativePostIterationCallback\nSDDP.RegularizedForwardPass","category":"page"},{"location":"apireference/#SDDP.AbstractForwardPass","page":"API Reference","title":"SDDP.AbstractForwardPass","text":"AbstractForwardPass\n\nAbstract type for different forward passes.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.DefaultForwardPass","page":"API Reference","title":"SDDP.DefaultForwardPass","text":"DefaultForwardPass(; include_last_node::Bool = true)\n\nThe default forward pass.\n\nIf include_last_node = false and the sample terminated due to a cycle, then the last node (which forms the cycle) is omitted. This can be a useful option to set when training, but it comes at the cost of not knowing which node formed the cycle (if there are multiple possibilities).\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.RevisitingForwardPass","page":"API Reference","title":"SDDP.RevisitingForwardPass","text":"RevisitingForwardPass(\n period::Int = 500;\n sub_pass::AbstractForwardPass = DefaultForwardPass(),\n)\n\nA forward pass scheme that generates period new forward passes (using sub_pass), then revisits all previously explored forward passes. This can be useful to encourage convergence at a diversity of points in the state-space.\n\nSet period = typemax(Int) to disable.\n\nFor example, if period = 2, then the forward passes will be revisited as follows: 1, 2, 1, 2, 3, 4, 1, 2, 3, 4, 5, 6, 1, 2, ....\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.RiskAdjustedForwardPass","page":"API Reference","title":"SDDP.RiskAdjustedForwardPass","text":"RiskAdjustedForwardPass(;\n forward_pass::AbstractForwardPass,\n risk_measure::AbstractRiskMeasure,\n resampling_probability::Float64,\n rejection_count::Int = 5,\n)\n\nA forward pass that resamples a previous forward pass with resampling_probability probability, and otherwise samples a new forward pass using forward_pass.\n\nThe forward pass to revisit is chosen based on the risk-adjusted (using risk_measure) probability of the cumulative stage objectives.\n\nNote that this objective corresponds to the first time we visited the trajectory. Subsequent visits may have improved things, but we don't have the mechanisms in-place to update it. 
Therefore, remove the forward pass from resampling consideration after rejection_count revisits.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.AlternativeForwardPass","page":"API Reference","title":"SDDP.AlternativeForwardPass","text":"AlternativeForwardPass(\n forward_model::SDDP.PolicyGraph{T};\n forward_pass::AbstractForwardPass = DefaultForwardPass(),\n)\n\nA forward pass that simulates using forward_model, which may be different to the model used in the backwards pass.\n\nWhen using this forward pass, you should almost always pass SDDP.AlternativePostIterationCallback to the post_iteration_callback argument of SDDP.train.\n\nThis forward pass is most useful when the forward_model is non-convex and we use a convex approximation of the model in the backward pass.\n\nFor example, in optimal power flow models, we can use an AC-OPF formulation as the forward_model and a DC-OPF formulation as the backward model.\n\nFor more details see the paper:\n\nRosemberg, A., and Street, A., and Garcia, J.D., and Valladão, D.M., and Silva, T., and Dowson, O. (2021). Assessing the cost of network simplifications in long-term hydrothermal dispatch planning models. IEEE Transactions on Sustainable Energy. 13(1), 196-206.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.AlternativePostIterationCallback","page":"API Reference","title":"SDDP.AlternativePostIterationCallback","text":"AlternativePostIterationCallback(forward_model::PolicyGraph)\n\nA post-iteration callback that should be used whenever SDDP.AlternativeForwardPass is used.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.RegularizedForwardPass","page":"API Reference","title":"SDDP.RegularizedForwardPass","text":"RegularizedForwardPass(;\n rho::Float64 = 0.05,\n forward_pass::AbstractForwardPass = DefaultForwardPass(),\n)\n\nA forward pass that regularizes the outgoing first-stage state variables with an L-infty trust-region constraint about the previous iteration's solution. Specifically, the bounds of the outgoing state variable x are updated from (l, u) to max(l, x^k - rho * (u - l)) <= x <= min(u, x^k + rho * (u - l)), where x^k is the optimal solution of x in the previous iteration. On the first iteration, the value of the state at the root node is used.\n\nBy default, rho is set to 5%, which seems to work well empirically.\n\nPass a different forward_pass to control the forward pass within the regularized forward pass.\n\nThis forward pass is largely intended to be used for investment problems in which the first stage makes a series of capacity decisions that then influence the rest of the graph. 
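As a usage sketch (the value rho = 0.1 is illustrative, not a recommendation), the regularized forward pass is selected through the forward_pass keyword of SDDP.train:\n\nSDDP.train(model; forward_pass = SDDP.RegularizedForwardPass(rho = 0.1))\n\n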
An error is thrown if the first stage problem is not deterministic, and states are silently skipped if they do not have finite bounds.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#Risk-Measures","page":"API Reference","title":"Risk Measures","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.AbstractRiskMeasure\nSDDP.adjust_probability","category":"page"},{"location":"apireference/#SDDP.AbstractRiskMeasure","page":"API Reference","title":"SDDP.AbstractRiskMeasure","text":"AbstractRiskMeasure\n\nThe abstract type for the risk measure interface.\n\nYou need to define the following methods:\n\nSDDP.adjust_probability\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.adjust_probability","page":"API Reference","title":"SDDP.adjust_probability","text":"adjust_probability(\n measure::Expectation\n risk_adjusted_probability::Vector{Float64},\n original_probability::Vector{Float64},\n noise_support::Vector{Noise{T}},\n objective_realizations::Vector{Float64},\n is_minimization::Bool,\n) where {T}\n\n\n\n\n\n","category":"function"},{"location":"apireference/#Duality-handlers","page":"API Reference","title":"Duality handlers","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.AbstractDualityHandler\nSDDP.ContinuousConicDuality\nSDDP.LagrangianDuality\nSDDP.StrengthenedConicDuality\nSDDP.BanditDuality","category":"page"},{"location":"apireference/#SDDP.AbstractDualityHandler","page":"API Reference","title":"SDDP.AbstractDualityHandler","text":"AbstractDualityHandler\n\nThe abstract type for the duality handler interface.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.ContinuousConicDuality","page":"API Reference","title":"SDDP.ContinuousConicDuality","text":"ContinuousConicDuality()\n\nCompute dual variables in the backward pass using conic duality, relaxing any binary or integer restrictions as necessary.\n\nTheory\n\nGiven the problem\n\nmin Cᵢ(x̄, u, w) + θᵢ\n st (x̄, x′, u) in Xᵢ(w) ∩ S\n x̄ - x == 0 [λ]\n\nwhere S ⊆ ℝ×ℤ, we relax integrality and using conic duality to solve for λ in the problem:\n\nmin Cᵢ(x̄, u, w) + θᵢ\n st (x̄, x′, u) in Xᵢ(w)\n x̄ - x == 0 [λ]\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.LagrangianDuality","page":"API Reference","title":"SDDP.LagrangianDuality","text":"LagrangianDuality(;\n method::LocalImprovementSearch.AbstractSearchMethod =\n LocalImprovementSearch.BFGS(100),\n)\n\nObtain dual variables in the backward pass using Lagrangian duality.\n\nArguments\n\nmethod: the LocalImprovementSearch method for maximizing the Lagrangian dual problem.\n\nTheory\n\nGiven the problem\n\nmin Cᵢ(x̄, u, w) + θᵢ\n st (x̄, x′, u) in Xᵢ(w) ∩ S\n x̄ - x == 0 [λ]\n\nwhere S ⊆ ℝ×ℤ, we solve the problem max L(λ), where:\n\nL(λ) = min Cᵢ(x̄, u, w) + θᵢ - λ' h(x̄)\n st (x̄, x′, u) in Xᵢ(w) ∩ S\n\nand where h(x̄) = x̄ - x.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.StrengthenedConicDuality","page":"API Reference","title":"SDDP.StrengthenedConicDuality","text":"StrengthenedConicDuality()\n\nObtain dual variables in the backward pass using strengthened conic duality.\n\nTheory\n\nGiven the problem\n\nmin Cᵢ(x̄, u, w) + θᵢ\n st (x̄, x′, u) in Xᵢ(w) ∩ S\n x̄ - x == 0 [λ]\n\nwe first obtain an estimate for λ using ContinuousConicDuality.\n\nThen, we evaluate the Lagrangian function:\n\nL(λ) = min Cᵢ(x̄, u, w) + θᵢ - λ' (x̄ - x`)\n st (x̄, x′, u) in Xᵢ(w) ∩ S\n\nto obtain a 
better estimate of the intercept.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.BanditDuality","page":"API Reference","title":"SDDP.BanditDuality","text":"BanditDuality()\n\nFormulates the problem of choosing a duality handler as a multi-armed bandit problem. The arms to choose between are:\n\nContinuousConicDuality\nStrengthenedConicDuality\nLagrangianDuality\n\nOur problem isn't a typical multi-armed bandit for two reasons:\n\nThe reward distribution is non-stationary (each arm converges to 0 as it keeps getting pulled).\nThe distribution of rewards is dependent on the history of the arms that were chosen.\n\nWe choose a very simple heuristic: pick the arm with the best mean + 1 standard deviation. That should ensure we consistently pick the arm with the best likelihood of improving the value function.\n\nIn future, we should consider discounting the rewards of earlier iterations, and focus more on the more-recent rewards.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#Simulating-the-policy","page":"API Reference","title":"Simulating the policy","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.simulate\nSDDP.calculate_bound\nSDDP.add_all_cuts","category":"page"},{"location":"apireference/#SDDP.simulate","page":"API Reference","title":"SDDP.simulate","text":"simulate(\n model::PolicyGraph,\n number_replications::Int = 1,\n variables::Vector{Symbol} = Symbol[];\n sampling_scheme::AbstractSamplingScheme =\n InSampleMonteCarlo(),\n custom_recorders = Dict{Symbol, Function}(),\n duality_handler::Union{Nothing,AbstractDualityHandler} = nothing,\n skip_undefined_variables::Bool = false,\n parallel_scheme::AbstractParallelScheme = Serial(),\n incoming_state::Dict{String,Float64} = _initial_state(model),\n )::Vector{Vector{Dict{Symbol,Any}}}\n\nPerform a simulation of the policy model with number_replications replications.\n\nReturn data structure\n\nReturns a vector with one element for each replication. Each element is a vector with one element for each node in the scenario that was sampled. Each element in that vector is a dictionary containing information about the subproblem that was solved.\n\nIn that dictionary there are four special keys:\n\n:node_index, which records the index of the sampled node in the policy model\n:noise_term, which records the noise observed at the node\n:stage_objective, which records the stage-objective of the subproblem\n:bellman_term, which records the cost/value-to-go of the node.\n\nThe sum of :stage_objective + :bellman_term will equal the objective value of the solved subproblem.\n\nIn addition to the special keys, the dictionary will contain the result of key => JuMP.value(subproblem[key]) for each key in variables. This is useful to obtain the primal value of the state and control variables.\n\nPositional arguments\n\nmodel: the model to simulate\nnumber_replications::Int = 1: the number of simulation replications to conduct, that is, the length of the simulation vector that is returned by this function. If omitted, this defaults to 1.\nvariables::Vector{Symbol} = Symbol[]: a list of the variable names to record the value of in each stage.\n\nKeyword arguments\n\nsampling_scheme: the sampling scheme used when simulating.\ncustom_recorders: see Custom recorders section below.\nduality_handler: the SDDP.AbstractDualityHandler used to compute dual variables. 
If you do not require dual variables (or if they are not available), pass duality_handler = nothing.\nskip_undefined_variables: If you attempt to simulate the value of a variable that is only defined in some of the stage problems, an error will be thrown. To over-ride this (and return a NaN instead), pass skip_undefined_variables = true.\nparallel_scheme: Use parallel_scheme::[AbstractParallelScheme](@ref) to specify a scheme for simulating in parallel. Defaults to Serial.\ninitial_state: Use incoming_state to pass an initial value of the state variable, if it differs from that at the root node. Each key should be the string name of the state variable.\n\nCustom recorders\n\nFor more complicated data, the custom_recorders keyword argument can be used.\n\nFor example, to record the dual of a constraint named my_constraint, pass the following:\n\nsimulation_results = SDDP.simulate(model, 2;\n custom_recorders = Dict{Symbol, Function}(\n :constraint_dual => sp -> JuMP.dual(sp[:my_constraint])\n )\n)\n\nThe value of the dual in the first stage of the second replication can be accessed as:\n\nsimulation_results[2][1][:constraint_dual]\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.calculate_bound","page":"API Reference","title":"SDDP.calculate_bound","text":"SDDP.calculate_bound(\n model::PolicyGraph,\n state::Dict{Symbol,Float64} = model.initial_root_state;\n risk_measure::AbstractRiskMeasure = Expectation(),\n)\n\nCalculate the lower bound (if minimizing, otherwise upper bound) of the problem model at the point state, assuming the risk measure at the root node is risk_measure.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.add_all_cuts","page":"API Reference","title":"SDDP.add_all_cuts","text":"add_all_cuts(model::PolicyGraph)\n\nAdd all cuts that may have been deleted back into the model.\n\nExplanation\n\nDuring the solve, SDDP.jl may decide to remove cuts for a variety of reasons.\n\nThese can include cuts that define the optimal value function, particularly around the extremes of the state-space (e.g., reservoirs empty).\n\nThis function ensures that all cuts discovered are added back into the model.\n\nYou should call this after train and before simulate.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#Decision-rules","page":"API Reference","title":"Decision rules","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.DecisionRule\nSDDP.evaluate","category":"page"},{"location":"apireference/#SDDP.DecisionRule","page":"API Reference","title":"SDDP.DecisionRule","text":"DecisionRule(model::PolicyGraph{T}; node::T)\n\nCreate a decision rule for node node in model.\n\nExample\n\nrule = SDDP.DecisionRule(model; node = 1)\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.evaluate","page":"API Reference","title":"SDDP.evaluate","text":"evaluate(\n rule::DecisionRule;\n incoming_state::Dict{Symbol,Float64},\n noise = nothing,\n controls_to_record = Symbol[],\n)\n\nEvalute the decision rule rule at the point described by the incoming_state and noise.\n\nIf the node is deterministic, omit the noise argument.\n\nPass a list of symbols to controls_to_record to save the optimal primal solution corresponding to the names registered in the model.\n\n\n\n\n\nevaluate(\n V::ValueFunction,\n point::Dict{Union{Symbol,String},<:Real}\n objective_state = nothing,\n belief_state = nothing\n)\n\nEvaluate the value function V at point in the state-space.\n\nReturns a tuple 
containing the height of the function, and the subgradient w.r.t. the convex state-variables.\n\nExamples\n\nevaluate(V, Dict(:volume => 1.0))\n\nIf the state variable is constructed like @variable(sp, volume[1:4] >= 0, SDDP.State, initial_value = 0.0), use [i] to index the state variable:\n\nevaluate(V, Dict(Symbol(\"volume[1]\") => 1.0))\n\nYou can also use strings or symbols for the keys.\n\nevaluate(V, Dict(\"volume[1]\" => 1))\n\n\n\n\n\nevalute(V::ValueFunction{Nothing, Nothing}; kwargs...)\n\nEvalute the value function V at the point in the state-space specified by kwargs.\n\nExamples\n\nevaluate(V; volume = 1)\n\n\n\n\n\nevaluate(\n model::PolicyGraph{T},\n validation_scenarios::ValidationScenarios{T,S},\n) where {T,S}\n\nEvaluate the performance of the policy contained in model after a call to train on the scenarios specified by validation_scenarios.\n\nExamples\n\nmodel, validation_scenarios = read_from_file(\"my_model.sof.json\")\ntrain(model; iteration_limit = 100)\nsimulations = evaluate(model, validation_scenarios)\n\n\n\n\n\n","category":"function"},{"location":"apireference/#Visualizing-the-policy","page":"API Reference","title":"Visualizing the policy","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.SpaghettiPlot\nSDDP.add_spaghetti\nSDDP.publication_plot\nSDDP.ValueFunction\nSDDP.evaluate(::SDDP.ValueFunction, ::Dict{Symbol,Float64})\nSDDP.plot","category":"page"},{"location":"apireference/#SDDP.SpaghettiPlot","page":"API Reference","title":"SDDP.SpaghettiPlot","text":"SDDP.SpaghettiPlot(; stages, scenarios)\n\nInitialize a new SpaghettiPlot with stages stages and scenarios number of replications.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.add_spaghetti","page":"API Reference","title":"SDDP.add_spaghetti","text":"SDDP.add_spaghetti(data_function::Function, plt::SpaghettiPlot; kwargs...)\n\nDescription\n\nAdd a new figure to the SpaghettiPlot plt, where the y-value of the scenarioth line when x = stage is given by data_function(plt.simulations[scenario][stage]).\n\nKeyword arguments\n\nxlabel: set the xaxis label\nylabel: set the yaxis label\ntitle: set the title of the plot\nymin: set the minimum y value\nymax: set the maximum y value\ncumulative: plot the additive accumulation of the value across the stages\ninterpolate: interpolation method for lines between stages.\n\nDefaults to \"linear\" see the d3 docs \tfor all options.\n\nExamples\n\nsimulations = simulate(model, 10)\nplt = SDDP.spaghetti_plot(simulations)\nSDDP.add_spaghetti(plt; title = \"Stage objective\") do data\n return data[:stage_objective]\nend\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.publication_plot","page":"API Reference","title":"SDDP.publication_plot","text":"SDDP.publication_plot(\n data_function, simulations;\n quantile = [0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0],\n kwargs...)\n\nCreate a Plots.jl recipe plot of the simulations.\n\nSee Plots.jl for the list of keyword arguments.\n\nExamples\n\nSDDP.publication_plot(simulations; title = \"My title\") do data\n return data[:stage_objective]\nend\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.ValueFunction","page":"API Reference","title":"SDDP.ValueFunction","text":"ValueFunction\n\nA representation of the value function. 
SDDP.jl uses the following unique representation of the value function that is undocumented in the literature.\n\nIt supports three types of state variables:\n\nx - convex \"resource\" states\nb - concave \"belief\" states\ny - concave \"objective\" states\n\nIn addition, we have three types of cuts:\n\nSingle-cuts (also called \"average\" cuts in the literature), which involve the risk-adjusted expectation of the cost-to-go.\nMulti-cuts, which use a different cost-to-go term for each realization w.\nRisk-cuts, which correspond to the facets of the dual interpretation of a coherent risk measure.\n\nTherefore, ValueFunction returns a JuMP model of the following form:\n\nV(x, b, y) = min: μᵀb + νᵀy + θ\n s.t. # \"Single\" / \"Average\" cuts\n μᵀb(j) + νᵀy(j) + θ >= α(j) + xᵀβ(j), ∀ j ∈ J\n # \"Multi\" cuts\n μᵀb(k) + νᵀy(k) + φ(w) >= α(k, w) + xᵀβ(k, w), ∀w ∈ Ω, k ∈ K\n # \"Risk-set\" cuts\n θ ≥ Σ{p(k, w) * φ(w)}_w - μᵀb(k) - νᵀy(k), ∀ k ∈ K\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.evaluate-Tuple{SDDP.ValueFunction, Dict{Symbol, Float64}}","page":"API Reference","title":"SDDP.evaluate","text":"evaluate(\n V::ValueFunction,\n point::Dict{Union{Symbol,String},<:Real}\n objective_state = nothing,\n belief_state = nothing\n)\n\nEvaluate the value function V at point in the state-space.\n\nReturns a tuple containing the height of the function, and the subgradient w.r.t. the convex state-variables.\n\nExamples\n\nevaluate(V, Dict(:volume => 1.0))\n\nIf the state variable is constructed like @variable(sp, volume[1:4] >= 0, SDDP.State, initial_value = 0.0), use [i] to index the state variable:\n\nevaluate(V, Dict(Symbol(\"volume[1]\") => 1.0))\n\nYou can also use strings or symbols for the keys.\n\nevaluate(V, Dict(\"volume[1]\" => 1))\n\n\n\n\n\n","category":"method"},{"location":"apireference/#SDDP.plot","page":"API Reference","title":"SDDP.plot","text":"plot(plt::SpaghettiPlot[, filename::String]; open::Bool = true)\n\nThe SpaghettiPlot plot plt to filename. If filename is not given, it will be saved to a temporary directory. If open = true, then a browser window will be opened to display the resulting HTML file.\n\n\n\n\n\n","category":"function"},{"location":"apireference/#Debugging-the-model","page":"API Reference","title":"Debugging the model","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.write_subproblem_to_file\nSDDP.deterministic_equivalent","category":"page"},{"location":"apireference/#SDDP.write_subproblem_to_file","page":"API Reference","title":"SDDP.write_subproblem_to_file","text":"write_subproblem_to_file(\n node::Node,\n filename::String;\n throw_error::Bool = false,\n)\n\nWrite the subproblem contained in node to the file filename.\n\nThe throw_error is an argument used internally by SDDP.jl. 
If set, an error will be thrown.\n\nExample\n\nSDDP.write_subproblem_to_file(model[1], \"subproblem_1.lp\")\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.deterministic_equivalent","page":"API Reference","title":"SDDP.deterministic_equivalent","text":"deterministic_equivalent(\n pg::PolicyGraph{T},\n optimizer = nothing;\n time_limit::Union{Real,Nothing} = 60.0,\n)\n\nForm a JuMP model that represents the deterministic equivalent of the problem.\n\nExamples\n\ndeterministic_equivalent(model)\n\ndeterministic_equivalent(model, HiGHS.Optimizer)\n\n\n\n\n\n","category":"function"},{"location":"apireference/#StochOptFormat","page":"API Reference","title":"StochOptFormat","text":"","category":"section"},{"location":"apireference/","page":"API Reference","title":"API Reference","text":"SDDP.write_to_file\nSDDP.read_from_file\nBase.write(::IO, ::SDDP.PolicyGraph)\nBase.read(::IO, ::Type{SDDP.PolicyGraph})\nSDDP.evaluate(::SDDP.PolicyGraph{T}, ::SDDP.ValidationScenarios{T}) where {T}\nSDDP.ValidationScenarios\nSDDP.ValidationScenario","category":"page"},{"location":"apireference/#SDDP.write_to_file","page":"API Reference","title":"SDDP.write_to_file","text":"write_to_file(\n model::PolicyGraph,\n filename::String;\n compression::MOI.FileFormats.AbstractCompressionScheme =\n MOI.FileFormats.AutomaticCompression(),\n kwargs...\n)\n\nWrite model to filename in the StochOptFormat file format.\n\nPass an argument to compression to override the default of automatically detecting the file compression to use based on the extension of filename.\n\nSee Base.write(::IO, ::PolicyGraph) for information on the keyword arguments that can be provided.\n\nwarning: Warning\nThis function is experimental. See the full warning in Base.write(::IO, ::PolicyGraph).\n\nExamples\n\nwrite_to_file(model, \"my_model.sof.json\"; validation_scenarios = 10)\n\n\n\n\n\n","category":"function"},{"location":"apireference/#SDDP.read_from_file","page":"API Reference","title":"SDDP.read_from_file","text":"read_from_file(\n filename::String;\n compression::MOI.FileFormats.AbstractCompressionScheme =\n MOI.FileFormats.AutomaticCompression(),\n kwargs...\n)::Tuple{PolicyGraph, ValidationScenarios}\n\nReturn a tuple containing a PolicyGraph object and a ValidationScenarios read from filename in the StochOptFormat file format.\n\nPass an argument to compression to override the default of automatically detecting the file compression to use based on the extension of filename.\n\nSee Base.read(::IO, ::Type{PolicyGraph}) for information on the keyword arguments that can be provided.\n\nwarning: Warning\nThis function is experimental. See the full warning in Base.read(::IO, ::Type{PolicyGraph}).\n\nExamples\n\nmodel, validation_scenarios = read_from_file(\"my_model.sof.json\")\n\n\n\n\n\n","category":"function"},{"location":"apireference/#Base.write-Tuple{IO, SDDP.PolicyGraph}","page":"API Reference","title":"Base.write","text":"Base.write(\n io::IO,\n model::PolicyGraph;\n validation_scenarios::Union{Nothing,Int,ValidationScenarios} = nothing,\n sampling_scheme::AbstractSamplingScheme = InSampleMonteCarlo(),\n kwargs...\n)\n\nWrite model to io in the StochOptFormat file format.\n\nPass an Int to validation_scenarios (default nothing) to specify the number of test scenarios to generate using the sampling_scheme sampling scheme. 
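For example, a minimal sketch of the integer form (the file name and the value 50 are illustrative):\n\nopen(\"my_model.sof.json\", \"w\") do io\n write(io, model; validation_scenarios = 50)\nend\n\n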
Alternatively, pass a ValidationScenarios object to manually specify the test scenarios to use.\n\nAny additional kwargs passed to write will be stored in the top-level of the resulting StochOptFormat file. Valid arguments include name, author, date, and description.\n\nCompatibility\n\nwarning: Warning\nTHIS FUNCTION IS EXPERIMENTAL. THINGS MAY CHANGE BETWEEN COMMITS. YOU SHOULD NOT RELY ON THIS FUNCTIONALITY AS A LONG-TERM FILE FORMAT (YET).\n\nIn addition to potential changes to the underlying format, only a subset of possible modifications are supported. These include:\n\nJuMP.fix\nJuMP.set_lower_bound\nJuMP.set_upper_bound\nJuMP.set_normalized_rhs\nChanges to the constant or affine terms in a stage objective.\n\nIf your model uses something other than this, this function will silently write an incorrect formulation of the problem.\n\nExamples\n\nopen(\"my_model.sof.json\", \"w\") do io\n write(\n io,\n model;\n validation_scenarios = 10,\n name = \"MyModel\",\n author = \"@odow\",\n date = \"2020-07-20\",\n description = \"Example problem for the SDDP.jl documentation\",\n )\nend\n\n\n\n\n\n","category":"method"},{"location":"apireference/#Base.read-Tuple{IO, Type{SDDP.PolicyGraph}}","page":"API Reference","title":"Base.read","text":"Base.read(\n io::IO,\n ::Type{PolicyGraph};\n bound::Float64 = 1e6,\n)::Tuple{PolicyGraph,ValidationScenarios}\n\nReturn a tuple containing a PolicyGraph object and a ValidationScenarios read from io in the StochOptFormat file format.\n\nSee also: evaluate.\n\nCompatibility\n\nwarning: Warning\nThis function is experimental. Things may change between commits. You should not rely on this functionality as a long-term file format (yet).\n\nIn addition to potential changes to the underlying format, only a subset of possible modifications are supported. These include:\n\nAdditive random variables in the constraints or in the objective\nMultiplicative random variables in the objective\n\nIf your model uses something other than this, this function may throw an error or silently build a non-convex model.\n\nExamples\n\nopen(\"my_model.sof.json\", \"r\") do io\n model, validation_scenarios = read(io, PolicyGraph)\nend\n\n\n\n\n\n","category":"method"},{"location":"apireference/#SDDP.evaluate-Union{Tuple{T}, Tuple{SDDP.PolicyGraph{T}, SDDP.ValidationScenarios{T}}} where T","page":"API Reference","title":"SDDP.evaluate","text":"evaluate(\n model::PolicyGraph{T},\n validation_scenarios::ValidationScenarios{T,S},\n) where {T,S}\n\nEvaluate the performance of the policy contained in model after a call to train on the scenarios specified by validation_scenarios.\n\nExamples\n\nmodel, validation_scenarios = read_from_file(\"my_model.sof.json\")\ntrain(model; iteration_limit = 100)\nsimulations = evaluate(model, validation_scenarios)\n\n\n\n\n\n","category":"method"},{"location":"apireference/#SDDP.ValidationScenarios","page":"API Reference","title":"SDDP.ValidationScenarios","text":"ValidationScenario{T,S}(scenarios::Vector{ValidationScenario{T,S}})\n\nAn AbstractSamplingScheme based on a vector of scenarios.\n\nEach scenario is a vector of Tuple{T, S} where the first element is the node to visit and the second element is the realization of the stagewise-independent noise term. 
Pass nothing if the node is deterministic.\n\n\n\n\n\n","category":"type"},{"location":"apireference/#SDDP.ValidationScenario","page":"API Reference","title":"SDDP.ValidationScenario","text":"ValidationScenario{T,S}(scenario::Vector{Tuple{T,S}})\n\nA single scenario for testing.\n\nSee also: ValidationScenarios.\n\n\n\n\n\n","category":"type"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"EditURL = \"markov_uncertainty.jl\"","category":"page"},{"location":"tutorial/markov_uncertainty/#Markovian-policy-graphs","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"","category":"section"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"In our previous tutorials (An introduction to SDDP.jl and Uncertainty in the objective function), we formulated a simple hydrothermal scheduling problem with stagewise-independent random variables in the right-hand side of the constraints and in the objective function. Now, in this tutorial, we introduce some stagewise-dependent uncertainty using a Markov chain.","category":"page"},{"location":"tutorial/markov_uncertainty/#Formulating-the-problem","page":"Markovian policy graphs","title":"Formulating the problem","text":"","category":"section"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"In this tutorial we consider a Markov chain with two climate states: wet and dry. Each Markov state is associated with an integer, in this case the wet climate state is Markov state 1 and the dry climate state is Markov state 2. In the wet climate state, the probability of the high inflow increases to 50%, and the probability of the low inflow decreases to 1/6. In the dry climate state, the converse happens. There is also persistence in the climate state: the probability of remaining in the current state is 75%, and the probability of transitioning to the other climate state is 25%. We assume that the first stage starts in the wet climate state.","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"Here is a picture of the model we're going to implement.","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"(Image: Markovian policy graph)","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"There are five nodes in our graph. Each node is named by a tuple (t, i), where t is the stage for t=1,2,3, and i is the Markov state for i=1,2. As before, the wavy lines denote the stagewise-independent random variable.","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"For each stage, we need to provide a Markov transition matrix. This is an MxN matrix, where the element A[i, j] gives the probability of transitioning from Markov state i in the previous stage to Markov state j in the current stage. 
The first stage is special because we assume there is a \"zero'th\" stage which has one Markov state (the round node in the graph above). Furthermore, the number of columns in the transition matrix of a stage (i.e. the number of Markov states) must equal the number of rows in the next stage's transition matrix. For our example, the vector of Markov transition matrices is given by:","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"T = Array{Float64,2}[[1.0]', [0.75 0.25], [0.75 0.25; 0.25 0.75]]","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"note: Note\nMake sure to add the ' after the first transition matrix so Julia can distinguish between a vector and a matrix.","category":"page"},{"location":"tutorial/markov_uncertainty/#Creating-a-model","page":"Markovian policy graphs","title":"Creating a model","text":"","category":"section"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"using SDDP, HiGHS\n\nΩ = [\n (inflow = 0.0, fuel_multiplier = 1.5),\n (inflow = 50.0, fuel_multiplier = 1.0),\n (inflow = 100.0, fuel_multiplier = 0.75),\n]\n\nmodel = SDDP.MarkovianPolicyGraph(;\n transition_matrices = Array{Float64,2}[\n [1.0]',\n [0.75 0.25],\n [0.75 0.25; 0.25 0.75],\n ],\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, node\n # Unpack the stage and Markov index.\n t, markov_state = node\n # Define the state variable.\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n # Define the control variables.\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n inflow\n end)\n # Define the constraints\n @constraints(\n subproblem,\n begin\n volume.out == volume.in + inflow - hydro_generation - hydro_spill\n thermal_generation + hydro_generation == 150.0\n end\n )\n # Note how we can use `markov_state` to dispatch an `if` statement.\n probability = if markov_state == 1 # wet climate state\n [1 / 6, 1 / 3, 1 / 2]\n else # dry climate state\n [1 / 2, 1 / 3, 1 / 6]\n end\n\n fuel_cost = [50.0, 100.0, 150.0]\n SDDP.parameterize(subproblem, Ω, probability) do ω\n JuMP.fix(inflow, ω.inflow)\n @stageobjective(\n subproblem,\n ω.fuel_multiplier * fuel_cost[t] * thermal_generation\n )\n end\nend","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"tip: Tip\nFor more information on SDDP.MarkovianPolicyGraphs, read Create a general policy graph.","category":"page"},{"location":"tutorial/markov_uncertainty/#Training-and-simulating-the-policy","page":"Markovian policy graphs","title":"Training and simulating the policy","text":"","category":"section"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"As in the previous three tutorials, we train the policy:","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"SDDP.train(model)","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"Instead of performing a Monte Carlo simulation like the previous tutorials, we may want to simulate one particular sequence of noise realizations. 
This historical simulation can also be conducted by passing a SDDP.Historical sampling scheme to the sampling_scheme keyword of the SDDP.simulate function.","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"We can confirm that the historical sequence of nodes was visited by querying the :node_index key of the simulation results.","category":"page"},{"location":"tutorial/markov_uncertainty/","page":"Markovian policy graphs","title":"Markovian policy graphs","text":"simulations = SDDP.simulate(\n model;\n sampling_scheme = SDDP.Historical([\n ((1, 1), Ω[1]),\n ((2, 2), Ω[3]),\n ((3, 1), Ω[2]),\n ]),\n)\n\n[stage[:node_index] for stage in simulations[1]]","category":"page"},{"location":"examples/FAST_hydro_thermal/","page":"FAST: the hydro-thermal problem","title":"FAST: the hydro-thermal problem","text":"EditURL = \"FAST_hydro_thermal.jl\"","category":"page"},{"location":"examples/FAST_hydro_thermal/#FAST:-the-hydro-thermal-problem","page":"FAST: the hydro-thermal problem","title":"FAST: the hydro-thermal problem","text":"","category":"section"},{"location":"examples/FAST_hydro_thermal/","page":"FAST: the hydro-thermal problem","title":"FAST: the hydro-thermal problem","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/FAST_hydro_thermal/","page":"FAST: the hydro-thermal problem","title":"FAST: the hydro-thermal problem","text":"An implementation of the Hydro-thermal example from FAST","category":"page"},{"location":"examples/FAST_hydro_thermal/","page":"FAST: the hydro-thermal problem","title":"FAST: the hydro-thermal problem","text":"using SDDP, HiGHS, Test\n\nfunction fast_hydro_thermal()\n model = SDDP.LinearPolicyGraph(;\n stages = 2,\n upper_bound = 0.0,\n sense = :Max,\n optimizer = HiGHS.Optimizer,\n ) do sp, t\n @variable(sp, 0 <= x <= 8, SDDP.State, initial_value = 0.0)\n @variables(sp, begin\n y >= 0\n p >= 0\n ξ\n end)\n @constraints(sp, begin\n p + y >= 6\n x.out <= x.in - y + ξ\n end)\n RAINFALL = (t == 1 ? [6] : [2, 10])\n SDDP.parameterize(sp, RAINFALL) do ω\n return JuMP.fix(ξ, ω)\n end\n @stageobjective(sp, -5 * p)\n end\n\n det = SDDP.deterministic_equivalent(model, HiGHS.Optimizer)\n set_silent(det)\n JuMP.optimize!(det)\n @test JuMP.objective_sense(det) == MOI.MAX_SENSE\n @test JuMP.objective_value(det) == -10\n SDDP.train(model)\n @test SDDP.calculate_bound(model) == -10\n return\nend\n\nfast_hydro_thermal()","category":"page"},{"location":"examples/StochDynamicProgramming.jl_multistock/","page":"StochDynamicProgramming: the multistock problem","title":"StochDynamicProgramming: the multistock problem","text":"EditURL = \"StochDynamicProgramming.jl_multistock.jl\"","category":"page"},{"location":"examples/StochDynamicProgramming.jl_multistock/#StochDynamicProgramming:-the-multistock-problem","page":"StochDynamicProgramming: the multistock problem","title":"StochDynamicProgramming: the multistock problem","text":"","category":"section"},{"location":"examples/StochDynamicProgramming.jl_multistock/","page":"StochDynamicProgramming: the multistock problem","title":"StochDynamicProgramming: the multistock problem","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/StochDynamicProgramming.jl_multistock/","page":"StochDynamicProgramming: the multistock problem","title":"StochDynamicProgramming: the multistock problem","text":"This example comes from StochDynamicProgramming.jl.","category":"page"},{"location":"examples/StochDynamicProgramming.jl_multistock/","page":"StochDynamicProgramming: the multistock problem","title":"StochDynamicProgramming: the multistock problem","text":"using SDDP, HiGHS, Test\n\nfunction test_multistock_example()\n model = SDDP.LinearPolicyGraph(;\n stages = 5,\n lower_bound = -5.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, stage\n @variable(\n subproblem,\n 0 <= stock[i = 1:3] <= 1,\n SDDP.State,\n initial_value = 0.5\n )\n @variables(subproblem, begin\n 0 <= control[i = 1:3] <= 0.5\n ξ[i = 1:3] # Dummy for RHS noise.\n end)\n @constraints(\n subproblem,\n begin\n sum(control) - 0.5 * 3 <= 0\n [i = 1:3], stock[i].out == stock[i].in + control[i] - ξ[i]\n end\n )\n Ξ = collect(\n Base.product((0.0, 0.15, 0.3), (0.0, 0.15, 0.3), (0.0, 0.15, 0.3)),\n )[:]\n SDDP.parameterize(subproblem, Ξ) do ω\n return JuMP.fix.(ξ, ω)\n end\n @stageobjective(subproblem, (sin(3 * stage) - 1) * sum(control))\n end\n SDDP.train(\n model;\n iteration_limit = 100,\n cut_type = SDDP.SINGLE_CUT,\n log_frequency = 10,\n )\n @test SDDP.calculate_bound(model) ≈ -4.349 atol = 0.01\n\n simulation_results = SDDP.simulate(model, 5000)\n @test length(simulation_results) == 5000\n μ = SDDP.Statistics.mean(\n sum(data[:stage_objective] for data in simulation) for\n simulation in simulation_results\n )\n @test μ ≈ -4.349 atol = 0.1\n return\nend\n\ntest_multistock_example()","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"EditURL = \"plotting.jl\"","category":"page"},{"location":"tutorial/plotting/#Plotting-tools","page":"Plotting tools","title":"Plotting tools","text":"","category":"section"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"In our previous tutorials, we formulated, solved, and simulated multistage stochastic optimization problems. However, we haven't really investigated what the solution looks like. Luckily, SDDP.jl includes a number of plotting tools to help us do that. In this tutorial, we explain the tools and make some pretty pictures.","category":"page"},{"location":"tutorial/plotting/#Preliminaries","page":"Plotting tools","title":"Preliminaries","text":"","category":"section"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"The next two plot types help visualize the policy. Thus, we first need to create a policy and simulate some trajectories. 
So, let's take the model from Markovian policy graphs, train it for 20 iterations, and then simulate 100 Monte Carlo realizations of the policy.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"using SDDP, HiGHS\n\nΩ = [\n (inflow = 0.0, fuel_multiplier = 1.5),\n (inflow = 50.0, fuel_multiplier = 1.0),\n (inflow = 100.0, fuel_multiplier = 0.75),\n]\n\nmodel = SDDP.MarkovianPolicyGraph(;\n transition_matrices = Array{Float64,2}[\n [1.0]',\n [0.75 0.25],\n [0.75 0.25; 0.25 0.75],\n ],\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, node\n t, markov_state = node\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n inflow\n end)\n @constraints(\n subproblem,\n begin\n volume.out == volume.in + inflow - hydro_generation - hydro_spill\n thermal_generation + hydro_generation == 150.0\n end\n )\n probability =\n markov_state == 1 ? [1 / 6, 1 / 3, 1 / 2] : [1 / 2, 1 / 3, 1 / 6]\n fuel_cost = [50.0, 100.0, 150.0]\n SDDP.parameterize(subproblem, Ω, probability) do ω\n JuMP.fix(inflow, ω.inflow)\n @stageobjective(\n subproblem,\n ω.fuel_multiplier * fuel_cost[t] * thermal_generation\n )\n end\nend\n\nSDDP.train(model; iteration_limit = 20, run_numerical_stability_report = false)\n\nsimulations = SDDP.simulate(\n model,\n 100,\n [:volume, :thermal_generation, :hydro_generation, :hydro_spill],\n)\n\nprintln(\"Completed $(length(simulations)) simulations.\")","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"Great! Now we have some data in simulations to visualize.","category":"page"},{"location":"tutorial/plotting/#Spaghetti-plots","page":"Plotting tools","title":"Spaghetti plots","text":"","category":"section"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"The first plotting utility we discuss is a spaghetti plot (you'll understand the name when you see the graph).","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"To create a spaghetti plot, begin by creating a new SDDP.SpaghettiPlot instance as follows:","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"plt = SDDP.SpaghettiPlot(simulations)","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"We can add plots to plt using the SDDP.add_spaghetti function.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"SDDP.add_spaghetti(plt; title = \"Reservoir volume\") do data\n return data[:volume].out\nend","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"In addition to returning values from the simulation, you can compute things:","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"SDDP.add_spaghetti(plt; title = \"Fuel cost\", ymin = 0, ymax = 250) do data\n if data[:thermal_generation] > 0\n return data[:stage_objective] / data[:thermal_generation]\n else # No thermal generation, so return 0.0.\n return 0.0\n end\nend","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"Note that there are many keyword arguments in addition to title. 
For example, we fixed the minimum and maximum values of the y-axis using ymin and ymax. See the SDDP.add_spaghetti documentation for all the arguments.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"Having built the plot, we now need to display it using SDDP.plot.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"SDDP.plot(plt, \"spaghetti_plot.html\")","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"This should open a webpage that looks like this one.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"Using the mouse, you can highlight individual trajectories by hovering over them. This makes it possible to visualize a single trajectory across multiple dimensions.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"If you click on the plot, then trajectories that are close to the mouse pointer are shown darker and those further away are shown lighter.","category":"page"},{"location":"tutorial/plotting/#Publication-plots","page":"Plotting tools","title":"Publication plots","text":"","category":"section"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"Instead of the interactive Javascript plots, you can also create some publication ready plots using the SDDP.publication_plot function.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"info: Info\nYou need to install the Plots.jl package for this to work. We used the GR backend (gr()), but any Plots.jl backend should work.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"SDDP.publication_plot implements a plot recipe to create ribbon plots of each variable against the stages. The first argument is the vector of simulation dictionaries and the second argument is the dictionary key that you want to plot. Standard Plots.jl keyword arguments such as title and xlabel can be used to modify the look of each plot. By default, the plot displays ribbons of the 0-100, 10-90, and 25-75 percentiles. The dark, solid line in the middle is the median (i.e. 
50'th percentile).","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"import Plots\nPlots.plot(\n SDDP.publication_plot(simulations; title = \"Outgoing volume\") do data\n return data[:volume].out\n end,\n SDDP.publication_plot(simulations; title = \"Thermal generation\") do data\n return data[:thermal_generation]\n end;\n xlabel = \"Stage\",\n ylims = (0, 200),\n layout = (1, 2),\n)","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"You can save this plot as a PDF using the Plots.jl function savefig:","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"Plots.savefig(\"my_picture.pdf\")","category":"page"},{"location":"tutorial/plotting/#Plotting-the-value-function","page":"Plotting tools","title":"Plotting the value function","text":"","category":"section"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"You can obtain an object representing the value function of a node using SDDP.ValueFunction.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"V = SDDP.ValueFunction(model[(1, 1)])","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"The value function can be evaluated using SDDP.evaluate.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"SDDP.evaluate(V; volume = 1)","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"evaluate returns the height of the value function, and a subgradient with respect to the convex state variables.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"You can also plot the value function using SDDP.plot","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"SDDP.plot(V, volume = 0:200, filename = \"value_function.html\")","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"This should open a webpage that looks like this one.","category":"page"},{"location":"tutorial/plotting/#Convergence-dashboard","page":"Plotting tools","title":"Convergence dashboard","text":"","category":"section"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"If the text-based logging isn't to your liking, you can open a visualization of the training by passing dashboard = true to SDDP.train.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"SDDP.train(model; dashboard = true)","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"By default, dashboard = false because there is an initial overhead associated with opening and preparing the plot.","category":"page"},{"location":"tutorial/plotting/","page":"Plotting tools","title":"Plotting tools","text":"warning: Warning\nThe dashboard is experimental. 
There are known bugs associated with it, e.g., SDDP.jl#226.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"EditURL = \"the_farmers_problem.jl\"","category":"page"},{"location":"examples/the_farmers_problem/#The-farmer's-problem","page":"The farmer's problem","title":"The farmer's problem","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"This problem is taken from Section 1.1 of the book Birge, J. R., & Louveaux, F. (2011). Introduction to Stochastic Programming. New York, NY: Springer New York. Paragraphs in quotes are taken verbatim.","category":"page"},{"location":"examples/the_farmers_problem/#Problem-description","page":"The farmer's problem","title":"Problem description","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"Consider a European farmer who specializes in raising wheat, corn, and sugar beets on his 500 acres of land. During the winter, [they want] to decide how much land to devote to each crop.The farmer knows that at least 200 tons (T) of wheat and 240 T of corn are needed for cattle feed. These amounts can be raised on the farm or bought from a wholesaler. Any production in excess of the feeding requirement would be sold.Over the last decade, mean selling prices have been $170 and $150 per ton of wheat and corn, respectively. The purchase prices are 40% more than this due to the wholesaler’s margin and transportation costs.Another profitable crop is sugar beet, which [they expect] to sell at $36/T; however, the European Commission imposes a quota on sugar beet production. Any amount in excess of the quota can be sold only at $10/T. The farmer’s quota for next year is 6000 T.\"Based on past experience, the farmer knows that the mean yield on [their] land is roughly 2.5 T, 3 T, and 20 T per acre for wheat, corn, and sugar beets, respectively.[To introduce uncertainty,] assume some correlation among the yields of the different crops. A very simplified representation of this would be to assume that years are good, fair, or bad for all crops, resulting in above average, average, or below average yields for all crops. 
To fix these ideas, above and below average indicate a yield 20% above or below the mean yield.","category":"page"},{"location":"examples/the_farmers_problem/#Problem-data","page":"The farmer's problem","title":"Problem data","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"The area of the farm.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"MAX_AREA = 500.0","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"There are three crops:","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"CROPS = [:wheat, :corn, :sugar_beet]","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"Each of the crops has a different planting cost ($/acre).","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"PLANTING_COST = Dict(:wheat => 150.0, :corn => 230.0, :sugar_beet => 260.0)","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"The farmer requires a minimum quantity of wheat and corn, but not of sugar beet (tonnes).","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"MIN_QUANTITIES = Dict(:wheat => 200.0, :corn => 240.0, :sugar_beet => 0.0)","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"In Europe, there is a quota system for producing crops. The farmer owns the following quota for each crop (tonnes):","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"QUOTA_MAX = Dict(:wheat => Inf, :corn => Inf, :sugar_beet => 6_000.0)","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"The farmer can sell crops produced under the quota for the following amounts ($/tonne):","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"SELL_IN_QUOTA = Dict(:wheat => 170.0, :corn => 150.0, :sugar_beet => 36.0)","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"If they sell more than their allotted quota, the farmer earns the following on each tonne of crop above the quota ($/tonne):","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"SELL_NO_QUOTA = Dict(:wheat => 0.0, :corn => 0.0, :sugar_beet => 10.0)","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"The purchase prices for wheat and corn are 40% more than their sales price. However, the description does not address the purchase price of sugar beet. 
Therefore, we use a large value of $1,000/tonne.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"BUY_PRICE = Dict(:wheat => 238.0, :corn => 210.0, :sugar_beet => 1_000.0)","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"On average, each crop has the following yield in tonnes/acre:","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"MEAN_YIELD = Dict(:wheat => 2.5, :corn => 3.0, :sugar_beet => 20.0)","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"However, the yield is random. In good years, the yield is +20% above average, and in bad years, the yield is -20% below average.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"YIELD_MULTIPLIER = Dict(:good => 1.2, :fair => 1.0, :bad => 0.8)","category":"page"},{"location":"examples/the_farmers_problem/#Mathematical-formulation","page":"The farmer's problem","title":"Mathematical formulation","text":"","category":"section"},{"location":"examples/the_farmers_problem/#SDDP.jl-code","page":"The farmer's problem","title":"SDDP.jl code","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"note: Note\nIn what follows, we make heavy use of the fact that you can look up variables by their symbol name in a JuMP model as follows:@variable(model, x)\nmodel[:x]Read the JuMP documentation if this isn't familiar to you.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"First up, load SDDP.jl and a solver. For this example, we use HiGHS.jl.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"using SDDP, HiGHS","category":"page"},{"location":"examples/the_farmers_problem/#State-variables","page":"The farmer's problem","title":"State variables","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"State variables are the information that flows between stages. 
In our example, the state variables are the areas of land devoted to growing each crop.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"function add_state_variables(subproblem)\n @variable(subproblem, area[c = CROPS] >= 0, SDDP.State, initial_value = 0)\nend","category":"page"},{"location":"examples/the_farmers_problem/#First-stage-problem","page":"The farmer's problem","title":"First stage problem","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"We can only plant a maximum of 500 acres, and we want to minimize the planting cost","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"function create_first_stage_problem(subproblem)\n @constraint(\n subproblem,\n sum(subproblem[:area][c].out for c in CROPS) <= MAX_AREA\n )\n @stageobjective(\n subproblem,\n -sum(PLANTING_COST[c] * subproblem[:area][c].out for c in CROPS)\n )\nend","category":"page"},{"location":"examples/the_farmers_problem/#Second-stage-problem","page":"The farmer's problem","title":"Second stage problem","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"Now let's consider the second stage problem. This is more complicated than the first stage, so we've broken it down into four sections:","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"control variables\nconstraints\nthe objective\nthe uncertainty","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"First, let's add the second stage control variables.","category":"page"},{"location":"examples/the_farmers_problem/#Variables","page":"The farmer's problem","title":"Variables","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"We add four types of control variables. Technically, the yield isn't a control variable. However, we add it as a dummy \"helper\" variable because it will be used when we add uncertainty.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"function second_stage_variables(subproblem)\n @variables(subproblem, begin\n 0 <= yield[c = CROPS] # tonnes/acre\n 0 <= buy[c = CROPS] # tonnes\n 0 <= sell_in_quota[c = CROPS] <= QUOTA_MAX[c] # tonnes\n 0 <= sell_no_quota[c = CROPS] # tonnes\n end)\nend","category":"page"},{"location":"examples/the_farmers_problem/#Constraints","page":"The farmer's problem","title":"Constraints","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"We need to define is the minimum quantity constraint. 
This ensures that MIN_QUANTITIES[c] of each crop is produced.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"function second_stage_constraint_min_quantity(subproblem)\n @constraint(\n subproblem,\n [c = CROPS],\n subproblem[:yield][c] + subproblem[:buy][c] -\n subproblem[:sell_in_quota][c] - subproblem[:sell_no_quota][c] >=\n MIN_QUANTITIES[c]\n )\nend","category":"page"},{"location":"examples/the_farmers_problem/#Objective","page":"The farmer's problem","title":"Objective","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"The objective of the second stage is to maximise revenue from selling crops, less the cost of buying corn and wheat if necessary to meet the minimum quantity constraint.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"function second_stage_objective(subproblem)\n @stageobjective(\n subproblem,\n sum(\n SELL_IN_QUOTA[c] * subproblem[:sell_in_quota][c] +\n SELL_NO_QUOTA[c] * subproblem[:sell_no_quota][c] -\n BUY_PRICE[c] * subproblem[:buy][c] for c in CROPS\n )\n )\nend","category":"page"},{"location":"examples/the_farmers_problem/#Random-variables","page":"The farmer's problem","title":"Random variables","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"Then, in the SDDP.parameterize function, we set the coefficient using JuMP.set_normalized_coefficient.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"function second_stage_uncertainty(subproblem)\n @constraint(\n subproblem,\n uncertainty[c = CROPS],\n 1.0 * subproblem[:area][c].in == subproblem[:yield][c]\n )\n SDDP.parameterize(subproblem, [:good, :fair, :bad]) do ω\n for c in CROPS\n JuMP.set_normalized_coefficient(\n uncertainty[c],\n subproblem[:area][c].in,\n MEAN_YIELD[c] * YIELD_MULTIPLIER[ω],\n )\n end\n end\nend","category":"page"},{"location":"examples/the_farmers_problem/#Putting-it-all-together","page":"The farmer's problem","title":"Putting it all together","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"Now we're ready to build the multistage stochastic programming model. In addition to the things already discussed, we need a few extra pieces of information.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"First, we are maximizing, so we set sense = :Max. Second, we need to provide a valid upper bound. (See Choosing an initial bound for more on this.) We know from Birge and Louveaux that the optimal solution is $108,390. 
So, let's choose $500,000 just to be safe.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"Here is the full model.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"model = SDDP.LinearPolicyGraph(;\n stages = 2,\n sense = :Max,\n upper_bound = 500_000.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, stage\n add_state_variables(subproblem)\n if stage == 1\n create_first_stage_problem(subproblem)\n else\n second_stage_variables(subproblem)\n second_stage_constraint_min_quantity(subproblem)\n second_stage_uncertainty(subproblem)\n second_stage_objective(subproblem)\n end\nend","category":"page"},{"location":"examples/the_farmers_problem/#Training-a-policy","page":"The farmer's problem","title":"Training a policy","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"Now that we've built a model, we need to train it using SDDP.train. The keyword iteration_limit stops the training after 40 iterations. See Choose a stopping rule for other ways to stop the training.","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"SDDP.train(model; iteration_limit = 40)","category":"page"},{"location":"examples/the_farmers_problem/#Checking-the-policy","page":"The farmer's problem","title":"Checking the policy","text":"","category":"section"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"Birge and Louveaux report that the optimal objective value is $108,390. Check that we got the correct solution using SDDP.calculate_bound:","category":"page"},{"location":"examples/the_farmers_problem/","page":"The farmer's problem","title":"The farmer's problem","text":"@assert isapprox(SDDP.calculate_bound(model), 108_390.0, atol = 0.1)","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"EditURL = \"warnings.jl\"","category":"page"},{"location":"tutorial/warnings/#Words-of-warning","page":"Words of warning","title":"Words of warning","text":"","category":"section"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"SDDP is a powerful solution technique for multistage stochastic programming. 
However, there are a number of subtle things to be aware of before creating your own models.","category":"page"},{"location":"tutorial/warnings/#Relatively-complete-recourse","page":"Words of warning","title":"Relatively complete recourse","text":"","category":"section"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"Models built in SDDP.jl need a property called relatively complete recourse.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"One definition of relatively complete recourse is that all feasible decisions (not necessarily optimal) in a subproblem lead to feasible decisions in future subproblems.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"For example, in the following problem, one feasible first stage decision is x.out = 0. But this causes an infeasibility in the second stage which requires x.in >= 1. This will throw an error about infeasibility if you try to solve.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"using SDDP, HiGHS\n\nmodel = SDDP.LinearPolicyGraph(;\n stages = 2,\n lower_bound = 0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variable(sp, x >= 0, SDDP.State, initial_value = 1)\n if t == 2\n @constraint(sp, x.in >= 1)\n end\n @stageobjective(sp, x.out)\nend\n\ntry #hide\n SDDP.train(model; iteration_limit = 1, print_level = 0)\ncatch err #hide\n showerror(stderr, err) #hide\nend #hide","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"warning: Warning\nThe actual constraints causing the infeasibilities can be deceptive! A good strategy to debug is to comment out all constraints. Then, one-by-one, un-comment the constraints and try resolving the model to check if it finds a feasible solution.","category":"page"},{"location":"tutorial/warnings/#Numerical-stability","page":"Words of warning","title":"Numerical stability","text":"","category":"section"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"If you aren't aware, SDDP builds an outer-approximation to a convex function using cutting planes. This results in a formulation that is particularly hard for solvers like HiGHS, Gurobi, and CPLEX to deal with. As a result, you may run into weird behavior. This behavior could include:","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"Iterations suddenly taking a long time (the solver stalled)\nSubproblems turning infeasible or unbounded after many iterations\nSolvers returning \"Numerical Error\" statuses","category":"page"},{"location":"tutorial/warnings/#Problem-scaling","page":"Words of warning","title":"Problem scaling","text":"","category":"section"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"In almost all cases, the cause of this is poor problem scaling. 
For our purposes, poor problem scaling means having variables with very large numbers and variables with very small numbers in the same model.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"tip: Tip\nGurobi has an excellent set of articles on numerical issues and how to avoid them.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"Consider, for example, the hydro-thermal scheduling problem we have been discussing in previous tutorials.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"If we define the volume of the reservoir in terms of m³, then a lake might have a capacity of 10^10 m³: @variable(subproblem, 0 <= volume <= 10^10). Moreover, the cost per cubic meter might be around $0.05/m³. To calculate the value of water in our reservoir, we need to multiply a variable on the order of 10^10, by one on the order of 10⁻²! That is twelve orders of magnitude!","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"To improve the performance of the SDDP algorithm (and reduce the chance of weird behavior), try to re-scale the units of the problem in order to reduce the largest difference in magnitude. For example, if we talk in terms of million m³, then we have a capacity of 10⁴ million m³, and a price of $50,000 per million m³. Now things are only one order of magnitude apart.","category":"page"},{"location":"tutorial/warnings/#Numerical-stability-report","page":"Words of warning","title":"Numerical stability report","text":"","category":"section"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"To aid in the diagnosis of numerical issues, you can call SDDP.numerical_stability_report. By default, this aggregates all of the nodes into a single report. You can produce a stability report for each node by passing by_node=true.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"using SDDP\n\nmodel =\n SDDP.LinearPolicyGraph(; stages = 2, lower_bound = -1e10) do subproblem, t\n @variable(subproblem, x >= -1e7, SDDP.State, initial_value = 1e-5)\n @constraint(subproblem, 1e9 * x.out >= 1e-6 * x.in + 1e-8)\n @stageobjective(subproblem, 1e9 * x.out)\n end\n\nSDDP.numerical_stability_report(model)","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"The report analyses the magnitude (in absolute terms) of the coefficients in the constraint matrix, the objective function, any variable bounds, and in the RHS of the constraints. A warning will be thrown if SDDP.jl detects very large or small values. 
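To make the re-scaling advice from the Problem scaling section concrete, here is a short, hedged sketch. It is not part of the original tutorial: it assumes the usual subproblem argument of an SDDP.jl model, and only one of the two formulations would appear in a real model.\n\n# Poorly scaled: volume in m³ and a price of $0.05/m³, so the objective\n# multiplies coefficients roughly twelve orders of magnitude apart.\n@variable(subproblem, 0 <= volume <= 10^10)\n@stageobjective(subproblem, 0.05 * volume)\n\n# Better scaled: volume in millions of m³ and a price of $50,000/million m³,\n# keeping all coefficients within about one order of magnitude.\n@variable(subproblem, 0 <= volume <= 10^4)\n@stageobjective(subproblem, 50_000 * volume)\n\nThe numerical stability report described above is the easiest way to detect when such a re-scaling is needed. 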
As discussed in Problem scaling, such a warning is an indication that you should reformulate your model.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"By default, a numerical stability check is run when you call SDDP.train, although it can be turned off by passing run_numerical_stability_report = false.","category":"page"},{"location":"tutorial/warnings/#Solver-specific-options","page":"Words of warning","title":"Solver-specific options","text":"","category":"section"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"If you have a particularly troublesome model, you should investigate setting solver-specific options to improve the numerical stability of each solver. For example, Gurobi has a NumericFocus option.","category":"page"},{"location":"tutorial/warnings/#Choosing-an-initial-bound","page":"Words of warning","title":"Choosing an initial bound","text":"","category":"section"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"One of the important requirements when building an SDDP model is to choose an appropriate bound on the objective (lower if minimizing, upper if maximizing). However, it can be hard to choose a bound if you don't know the solution! (Which is very likely.)","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"The bound should be as tight as possible, because a tight bound helps with convergence and with the numerical issues discussed above. However, if the bound is chosen too tight, so that it cuts off part of the true cost-to-go function, the policy will converge to a sub-optimal solution.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"Consider the following simple model, where we first set lower_bound to 0.0.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"using SDDP, HiGHS\n\nmodel = SDDP.LinearPolicyGraph(;\n stages = 3,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, t\n @variable(subproblem, x >= 0, SDDP.State, initial_value = 2)\n @variable(subproblem, u >= 0)\n @variable(subproblem, v >= 0)\n @constraint(subproblem, x.out == x.in - u)\n @constraint(subproblem, u + v == 1.5)\n @stageobjective(subproblem, t * v)\nend\n\nSDDP.train(model; iteration_limit = 5, run_numerical_stability_report = false)","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"Now consider the case when we set the lower_bound to 10.0:","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"using SDDP, HiGHS\n\nmodel = SDDP.LinearPolicyGraph(;\n stages = 3,\n sense = :Min,\n lower_bound = 10.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, t\n @variable(subproblem, x >= 0, SDDP.State, initial_value = 2)\n @variable(subproblem, u >= 0)\n @variable(subproblem, v >= 0)\n @constraint(subproblem, x.out == x.in - u)\n @constraint(subproblem, u + v == 1.5)\n @stageobjective(subproblem, t * v)\nend\n\nSDDP.train(model; iteration_limit = 5, run_numerical_stability_report = false)","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"How do we tell which is more appropriate? 
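Before listing some qualitative clues, here is one concrete check, written as a hedged sketch: it re-uses the model and SDDP.train calls immediately above, the standard-library Statistics package, and an arbitrary sample size of 500, and compares the converged bound with a Monte Carlo estimate of the cost of the policy in the same way as the earlier tutorials.\n\nusing Statistics\n\nsimulations = SDDP.simulate(model, 500)\nobjectives =\n [sum(data[:stage_objective] for data in sim) for sim in simulations]\nμ, σ = mean(objectives), std(objectives)\nprintln(\"Simulated cost: \", μ, \" ± \", 1.96 * σ / sqrt(length(objectives)))\nprintln(\"Training bound: \", SDDP.calculate_bound(model))\n\n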
There are a few clues that you should look out for.","category":"page"},{"location":"tutorial/warnings/","page":"Words of warning","title":"Words of warning","text":"The bound converges to a value above (if minimizing) the simulated cost of the policy. In this case, the problem is deterministic, so it is easy to tell. But you can also check by performing a Monte Carlo simulation like we did in An introduction to SDDP.jl.\nThe bound converges to different values when we change the bound. This is another clear give-away. The bound provided by the user is only used in the initial iterations. It should not change the value of the converged policy. Thus, if you don't know an appropriate value for the bound, choose an initial value, and then increase (or decrease) the value of the bound to confirm that the value of the policy doesn't change.\nThe bound converges to a value close to the bound provided by the user. This varies between models, but notice that 11.0 is quite close to 10.0 compared with 3.5 and 0.0.","category":"page"},{"location":"guides/add_a_multidimensional_state_variable/#Add-a-multi-dimensional-state-variable","page":"Add a multi-dimensional state variable","title":"Add a multi-dimensional state variable","text":"","category":"section"},{"location":"guides/add_a_multidimensional_state_variable/","page":"Add a multi-dimensional state variable","title":"Add a multi-dimensional state variable","text":"DocTestSetup = quote\n using SDDP, HiGHS\nend","category":"page"},{"location":"guides/add_a_multidimensional_state_variable/","page":"Add a multi-dimensional state variable","title":"Add a multi-dimensional state variable","text":"Just like normal JuMP variables, it is possible to create containers of state variables.","category":"page"},{"location":"guides/add_a_multidimensional_state_variable/","page":"Add a multi-dimensional state variable","title":"Add a multi-dimensional state variable","text":"julia> model = SDDP.LinearPolicyGraph(\n stages=1, lower_bound = 0, optimizer = HiGHS.Optimizer\n ) do subproblem, t\n # A scalar state variable.\n @variable(subproblem, x >= 0, SDDP.State, initial_value = 0)\n println(\"Lower bound of outgoing x is: \", JuMP.lower_bound(x.out))\n # A vector of state variables.\n @variable(subproblem, y[i = 1:2] >= i, SDDP.State, initial_value = i)\n println(\"Lower bound of outgoing y[1] is: \", JuMP.lower_bound(y[1].out))\n # A JuMP.Containers.DenseAxisArray of state variables.\n @variable(subproblem,\n z[i = 3:4, j = [:A, :B]] >= i, SDDP.State, initial_value = i)\n println(\"Lower bound of outgoing z[3, :B] is: \", JuMP.lower_bound(z[3, :B].out))\n end;\nLower bound of outgoing x is: 0.0\nLower bound of outgoing y[1] is: 1.0\nLower bound of outgoing z[3, :B] is: 3.0","category":"page"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"EditURL = \"objective_uncertainty.jl\"","category":"page"},{"location":"tutorial/objective_uncertainty/#Uncertainty-in-the-objective-function","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"","category":"section"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"In the previous tutorial, An introduction to SDDP.jl, we created a stochastic hydro-thermal scheduling model. In this tutorial, we extend the problem by adding uncertainty to the fuel costs.","category":"page"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"Previously, we assumed that the fuel cost was deterministic: $50/MWh in the first stage, $100/MWh in the second stage, and $150/MWh in the third stage. For this tutorial, we assume that in addition to these base costs, the actual fuel cost is correlated with the inflows.","category":"page"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"Our new model for the uncertainty is given by the following table:","category":"page"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"ω 1 2 3\nP(ω) 1/3 1/3 1/3\ninflow 0 50 100\nfuel multiplier 1.5 1.0 0.75","category":"page"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"In stage t, the objective is now to minimize:","category":"page"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"fuel_multiplier * fuel_cost[t] * thermal_generation","category":"page"},{"location":"tutorial/objective_uncertainty/#Creating-a-model","page":"Uncertainty in the objective function","title":"Creating a model","text":"","category":"section"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"To add an uncertain objective, we can simply call @stageobjective from inside the SDDP.parameterize function.","category":"page"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"using SDDP, HiGHS\n\nmodel = SDDP.LinearPolicyGraph(;\n stages = 3,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, t\n # Define the state variable.\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n # Define the control variables.\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n inflow\n end)\n # Define the constraints\n @constraints(\n subproblem,\n begin\n volume.out == volume.in + inflow - hydro_generation - hydro_spill\n thermal_generation + hydro_generation == 150.0\n end\n )\n fuel_cost = [50.0, 100.0, 150.0]\n # Parameterize the subproblem.\n Ω = [\n (inflow = 0.0, fuel_multiplier = 1.5),\n (inflow = 50.0, fuel_multiplier = 1.0),\n (inflow = 100.0, fuel_multiplier = 0.75),\n ]\n SDDP.parameterize(subproblem, Ω, [1 / 3, 1 / 3, 1 / 3]) do ω\n JuMP.fix(inflow, ω.inflow)\n @stageobjective(\n subproblem,\n ω.fuel_multiplier * fuel_cost[t] * thermal_generation\n )\n end\nend","category":"page"},{"location":"tutorial/objective_uncertainty/#Training-and-simulating-the-policy","page":"Uncertainty in the objective function","title":"Training and 
simulating the policy","text":"","category":"section"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"As in the previous two tutorials, we train and simulate the policy:","category":"page"},{"location":"tutorial/objective_uncertainty/","page":"Uncertainty in the objective function","title":"Uncertainty in the objective function","text":"SDDP.train(model)\n\nsimulations = SDDP.simulate(model, 500)\n\nobjective_values =\n [sum(stage[:stage_objective] for stage in sim) for sim in simulations]\n\nusing Statistics\n\nμ = round(mean(objective_values); digits = 2)\nci = round(1.96 * std(objective_values) / sqrt(500); digits = 2)\n\nprintln(\"Confidence interval: \", μ, \" ± \", ci)\nprintln(\"Lower bound: \", round(SDDP.calculate_bound(model); digits = 2))","category":"page"},{"location":"guides/add_a_risk_measure/#Add-a-risk-measure","page":"Add a risk measure","title":"Add a risk measure","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"DocTestSetup = quote\n using SDDP, HiGHS\nend","category":"page"},{"location":"guides/add_a_risk_measure/#Training-a-risk-averse-model","page":"Add a risk measure","title":"Training a risk-averse model","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.jl supports a variety of risk measures. Two common ones are SDDP.Expectation and SDDP.WorstCase. Let's see how to train a policy using them. There are three possible ways.","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"If the same risk measure is used at every node in the policy graph, we can just pass an instance of one of the risk measures to the risk_measure keyword argument of the SDDP.train function.","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.train(\n model,\n risk_measure = SDDP.WorstCase(),\n iteration_limit = 10\n)","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"However, if you want different risk measures at different nodes, there are two options. First, you can pass risk_measure a dictionary of risk measures, with one entry for each node. 
The keys of the dictionary are the indices of the nodes.","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.train(\n model,\n risk_measure = Dict(\n 1 => SDDP.Expectation(),\n 2 => SDDP.WorstCase()\n ),\n iteration_limit = 10\n)","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"An alternative method is to pass risk_measure a function that takes one argument, the index of a node, and returns an instance of a risk measure:","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.train(\n model,\n risk_measure = (node_index) -> begin\n if node_index == 1\n return SDDP.Expectation()\n else\n return SDDP.WorstCase()\n end\n end,\n iteration_limit = 10\n)","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"note: Note\nIf you simulate the policy, the simulated value is the risk-neutral value of the policy.","category":"page"},{"location":"guides/add_a_risk_measure/#Risk-measures","page":"Add a risk measure","title":"Risk measures","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"To illustrate the risk-measures included in SDDP.jl, we consider a discrete random variable with four outcomes.","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"The random variable is supported on the values 1, 2, 3, and 4:","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"noise_supports = [1, 2, 3, 4]","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"The associated probability of each outcome is as follows:","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"nominal_probability = [0.1, 0.2, 0.3, 0.4]","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"With each outcome ω, the agent observes a cost Z(ω):","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"cost_realizations = [5.0, 4.0, 6.0, 2.0]","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"We assume that we are minimizing:","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"is_minimization = true","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"Finally, we create a vector that will be used to store the risk-adjusted probabilities:","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"risk_adjusted_probability = zeros(4)","category":"page"},{"location":"guides/add_a_risk_measure/#Expectation","page":"Add a risk measure","title":"Expectation","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk 
measure","text":"SDDP.Expectation","category":"page"},{"location":"guides/add_a_risk_measure/#SDDP.Expectation","page":"Add a risk measure","title":"SDDP.Expectation","text":"Expectation()\n\nThe Expectation risk measure. Identical to taking the expectation with respect to the nominal distribution.\n\n\n\n\n\n","category":"type"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"using SDDP\nSDDP.adjust_probability(\n SDDP.Expectation(),\n risk_adjusted_probability,\n nominal_probability,\n noise_supports,\n cost_realizations,\n is_minimization\n)\nrisk_adjusted_probability","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.Expectation is the default risk measure in SDDP.jl.","category":"page"},{"location":"guides/add_a_risk_measure/#Worst-case","page":"Add a risk measure","title":"Worst-case","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.WorstCase","category":"page"},{"location":"guides/add_a_risk_measure/#SDDP.WorstCase","page":"Add a risk measure","title":"SDDP.WorstCase","text":"WorstCase()\n\nThe worst-case risk measure. Places all of the probability weight on the worst outcome.\n\n\n\n\n\n","category":"type"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.adjust_probability(\n SDDP.WorstCase(),\n risk_adjusted_probability,\n nominal_probability,\n noise_supports,\n cost_realizations,\n is_minimization\n)\nrisk_adjusted_probability","category":"page"},{"location":"guides/add_a_risk_measure/#Average-value-at-risk-(AV@R)","page":"Add a risk measure","title":"Average value at risk (AV@R)","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.AVaR","category":"page"},{"location":"guides/add_a_risk_measure/#SDDP.AVaR","page":"Add a risk measure","title":"SDDP.AVaR","text":"AVaR(β)\n\nThe average value at risk (AV@R) risk measure.\n\nComputes the expectation of the β fraction of worst outcomes. β must be in [0, 1]. When β=1, this is equivalent to the Expectation risk measure. When β=0, this is equivalent to the WorstCase risk measure.\n\nAV@R is also known as the conditional value at risk (CV@R) or expected shortfall.\n\n\n\n\n\n","category":"type"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.adjust_probability(\n SDDP.AVaR(0.5),\n risk_adjusted_probability,\n nominal_probability,\n noise_supports,\n cost_realizations,\n is_minimization\n)\nrisk_adjusted_probability","category":"page"},{"location":"guides/add_a_risk_measure/#Convex-combination-of-risk-measures","page":"Add a risk measure","title":"Convex combination of risk measures","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"Using the axioms of coherent risk measures, it is easy to show that any convex combination of coherent risk measures is also a coherent risk measure. 
Convex combinations of risk measures can be created directly:","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"cvx_comb_measure = 0.5 * SDDP.Expectation() + 0.5 * SDDP.WorstCase()\nSDDP.adjust_probability(\n cvx_comb_measure,\n risk_adjusted_probability,\n nominal_probability,\n noise_supports,\n cost_realizations,\n is_minimization\n)\nrisk_adjusted_probability","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"As a special case, the SDDP.EAVaR risk-measure is a convex combination of SDDP.Expectation and SDDP.AVaR:","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.EAVaR(beta=0.25, lambda=0.4)","category":"page"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.EAVaR","category":"page"},{"location":"guides/add_a_risk_measure/#SDDP.EAVaR","page":"Add a risk measure","title":"SDDP.EAVaR","text":"EAVaR(;lambda=1.0, beta=1.0)\n\nA risk measure that is a convex combination of Expectation and Average Value @ Risk (also called Conditional Value @ Risk).\n\n λ * E[x] + (1 - λ) * AV@R(β)[x]\n\nKeyword Arguments\n\nlambda: Convex weight on the expectation ((1-lambda) weight is put on the AV@R component. Inreasing values of lambda are less risk averse (more weight on expectation).\nbeta: The quantile at which to calculate the Average Value @ Risk. Increasing values of beta are less risk averse. If beta=0, then the AV@R component is the worst case risk measure.\n\n\n\n\n\n","category":"function"},{"location":"guides/add_a_risk_measure/#Distributionally-robust","page":"Add a risk measure","title":"Distributionally robust","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.jl supports two types of distributionally robust risk measures: the modified Χ² method of Philpott et al. (2018), and a method based on the Wasserstein distance metric.","category":"page"},{"location":"guides/add_a_risk_measure/#Modified-Chi-squard","page":"Add a risk measure","title":"Modified Chi-squard","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.ModifiedChiSquared","category":"page"},{"location":"guides/add_a_risk_measure/#SDDP.ModifiedChiSquared","page":"Add a risk measure","title":"SDDP.ModifiedChiSquared","text":"ModifiedChiSquared(radius::Float64; minimum_std=1e-5)\n\nThe distributionally robust SDDP risk measure of Philpott, A., de Matos, V., Kapelevich, L. Distributionally robust SDDP. Computational Management Science (2018) 165:431-454.\n\nExplanation\n\nIn a Distributionally Robust Optimization (DRO) approach, we modify the probabilities we associate with all future scenarios so that the resulting probability distribution is the \"worst case\" probability distribution, in some sense.\n\nIn each backward pass we will compute a worst case probability distribution vector p. We compute p so that:\n\np ∈ argmax p'z\n s.t. [r; p - a] in SecondOrderCone()\n sum(p) == 1\n p >= 0\n\nwhere\n\nz is a vector of future costs. We assume that our aim is to minimize future cost p'z. 
If we maximize reward, we would have p ∈ argmin{p'z}.\na is the uniform distribution\nr is a user specified radius - the larger the radius, the more conservative the policy.\n\nNotes\n\nThe largest radius that will work with S scenarios is sqrt((S-1)/S).\n\nIf the uncorrected standard deviation of the objecive realizations is less than minimum_std, then the risk-measure will default to Expectation().\n\nThis code was contributed by Lea Kapelevich.\n\n\n\n\n\n","category":"type"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.adjust_probability(\n SDDP.ModifiedChiSquared(0.5),\n risk_adjusted_probability,\n [0.25, 0.25, 0.25, 0.25],\n noise_supports,\n cost_realizations,\n is_minimization\n)\nrisk_adjusted_probability","category":"page"},{"location":"guides/add_a_risk_measure/#Wasserstein","page":"Add a risk measure","title":"Wasserstein","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.Wasserstein","category":"page"},{"location":"guides/add_a_risk_measure/#SDDP.Wasserstein","page":"Add a risk measure","title":"SDDP.Wasserstein","text":"Wasserstein(norm::Function, solver_factory; alpha::Float64)\n\nA distributionally-robust risk measure based on the Wasserstein distance.\n\nAs alpha increases, the measure becomes more risk-averse. When alpha=0, the measure is equivalent to the expectation operator. As alpha increases, the measure approaches the Worst-case risk measure.\n\n\n\n\n\n","category":"type"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"import HiGHS\nSDDP.adjust_probability(\n SDDP.Wasserstein(HiGHS.Optimizer; alpha=0.5) do x, y\n return abs(x - y)\n end,\n risk_adjusted_probability,\n nominal_probability,\n noise_supports,\n cost_realizations,\n is_minimization\n)\nrisk_adjusted_probability","category":"page"},{"location":"guides/add_a_risk_measure/#Entropic","page":"Add a risk measure","title":"Entropic","text":"","category":"section"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.Entropic","category":"page"},{"location":"guides/add_a_risk_measure/#SDDP.Entropic","page":"Add a risk measure","title":"SDDP.Entropic","text":"Entropic(γ::Float64)\n\nThe entropic risk measure as described by:\n\nDowson, O., Morton, D.P. & Pagnoncelli, B.K. Incorporating convex risk\nmeasures into multistage stochastic programming algorithms. Annals of\nOperations Research (2022). 
[doi](https://doi.org/10.1007/s10479-022-04977-w).\n\nAs γ increases, the measure becomes more risk-averse.\n\n\n\n\n\n","category":"type"},{"location":"guides/add_a_risk_measure/","page":"Add a risk measure","title":"Add a risk measure","text":"SDDP.adjust_probability(\n SDDP.Entropic(0.1),\n risk_adjusted_probability,\n nominal_probability,\n noise_supports,\n cost_realizations,\n is_minimization\n)\nrisk_adjusted_probability","category":"page"},{"location":"examples/infinite_horizon_trivial/","page":"Infinite horizon trivial","title":"Infinite horizon trivial","text":"EditURL = \"infinite_horizon_trivial.jl\"","category":"page"},{"location":"examples/infinite_horizon_trivial/#Infinite-horizon-trivial","page":"Infinite horizon trivial","title":"Infinite horizon trivial","text":"","category":"section"},{"location":"examples/infinite_horizon_trivial/","page":"Infinite horizon trivial","title":"Infinite horizon trivial","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/infinite_horizon_trivial/","page":"Infinite horizon trivial","title":"Infinite horizon trivial","text":"using SDDP, HiGHS, Test\n\nfunction infinite_trivial()\n graph = SDDP.Graph(\n :root_node,\n [:week],\n [(:root_node => :week, 1.0), (:week => :week, 0.9)],\n )\n model = SDDP.PolicyGraph(\n graph;\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, node\n @variable(subproblem, state, SDDP.State, initial_value = 0)\n @constraint(subproblem, state.in == state.out)\n @stageobjective(subproblem, 2.0)\n end\n SDDP.train(model; log_frequency = 10)\n @test SDDP.calculate_bound(model) ≈ 2.0 / (1 - 0.9) atol = 1e-3\n return\nend\n\ninfinite_trivial()","category":"page"},{"location":"examples/air_conditioning/","page":"Air conditioning","title":"Air conditioning","text":"EditURL = \"air_conditioning.jl\"","category":"page"},{"location":"examples/air_conditioning/#Air-conditioning","page":"Air conditioning","title":"Air conditioning","text":"","category":"section"},{"location":"examples/air_conditioning/","page":"Air conditioning","title":"Air conditioning","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/air_conditioning/","page":"Air conditioning","title":"Air conditioning","text":"Taken from Anthony Papavasiliou's notes on SDDP","category":"page"},{"location":"examples/air_conditioning/","page":"Air conditioning","title":"Air conditioning","text":"Consider the following problem","category":"page"},{"location":"examples/air_conditioning/","page":"Air conditioning","title":"Air conditioning","text":"Produce air conditioners for 3 months\n200 units/month at 100 $/unit\nOvertime costs 300 $/unit\nKnown demand of 100 units for period 1\nEqually likely demand, 100 or 300 units, for periods 2, 3\nStorage cost is 50 $/unit\nAll demand must be met","category":"page"},{"location":"examples/air_conditioning/","page":"Air conditioning","title":"Air conditioning","text":"The known optimal solution is $62,500","category":"page"},{"location":"examples/air_conditioning/","page":"Air conditioning","title":"Air conditioning","text":"using SDDP, HiGHS, Test\n\nfunction air_conditioning_model(duality_handler)\n model = SDDP.LinearPolicyGraph(;\n stages = 3,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, stage\n @variable(\n sp,\n 0 <= stored_production <= 100,\n Int,\n SDDP.State,\n initial_value = 0\n )\n @variable(sp, 0 <= production <= 200, Int)\n @variable(sp, overtime >= 0, Int)\n @variable(sp, demand)\n DEMAND = [[100.0], [100.0, 300.0], [100.0, 300.0]]\n SDDP.parameterize(ω -> JuMP.fix(demand, ω), sp, DEMAND[stage])\n @constraint(\n sp,\n stored_production.out ==\n stored_production.in + production + overtime - demand\n )\n @stageobjective(\n sp,\n 100 * production + 300 * overtime + 50 * stored_production.out\n )\n end\n SDDP.train(model; duality_handler = duality_handler)\n @test isapprox(SDDP.calculate_bound(model), 62_500.0, atol = 0.1)\n return\nend\n\nfor duality_handler in [SDDP.LagrangianDuality(), SDDP.ContinuousConicDuality()]\n air_conditioning_model(duality_handler)\nend","category":"page"},{"location":"examples/sldp_example_two/","page":"SLDP: example 2","title":"SLDP: example 2","text":"EditURL = \"sldp_example_two.jl\"","category":"page"},{"location":"examples/sldp_example_two/#SLDP:-example-2","page":"SLDP: example 2","title":"SLDP: example 2","text":"","category":"section"},{"location":"examples/sldp_example_two/","page":"SLDP: example 2","title":"SLDP: example 2","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/sldp_example_two/","page":"SLDP: example 2","title":"SLDP: example 2","text":"This example is derived from Section 4.3 of the paper: Ahmed, S., Cabral, F. G., & da Costa, B. F. P. (2019). Stochastic Lipschitz Dynamic Programming. Optimization Online. 
PDF","category":"page"},{"location":"examples/sldp_example_two/","page":"SLDP: example 2","title":"SLDP: example 2","text":"using SDDP\nimport HiGHS\nimport Test\n\nfunction sldp_example_two(; first_stage_integer::Bool = true, N = 2)\n model = SDDP.LinearPolicyGraph(;\n stages = 2,\n lower_bound = -100.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, t\n @variable(sp, 0 <= x[1:2] <= 5, SDDP.State, initial_value = 0.0)\n if t == 1\n if first_stage_integer\n @variable(sp, 0 <= u[1:2] <= 5, Int)\n @constraint(sp, [i = 1:2], u[i] == x[i].out)\n end\n @stageobjective(sp, -1.5 * x[1].out - 4 * x[2].out)\n else\n @variable(sp, 0 <= y[1:4] <= 1, Bin)\n @variable(sp, ω[1:2])\n @stageobjective(sp, -16 * y[1] - 19 * y[2] - 23 * y[3] - 28 * y[4])\n @constraint(\n sp,\n 2 * y[1] + 3 * y[2] + 4 * y[3] + 5 * y[4] <= ω[1] - x[1].in\n )\n @constraint(\n sp,\n 6 * y[1] + 1 * y[2] + 3 * y[3] + 2 * y[4] <= ω[2] - x[2].in\n )\n steps = range(5; stop = 15, length = N)\n SDDP.parameterize(sp, [[i, j] for i in steps for j in steps]) do φ\n return JuMP.fix.(ω, φ)\n end\n end\n end\n if get(ARGS, 1, \"\") == \"--write\"\n # Run `$ julia sldp_example_two.jl --write` to update the benchmark\n # model directory\n model_dir = joinpath(@__DIR__, \"..\", \"..\", \"..\", \"benchmarks\", \"models\")\n SDDP.write_to_file(\n model,\n joinpath(model_dir, \"sldp_example_two_$(N).sof.json.gz\");\n test_scenarios = 30,\n )\n return\n end\n SDDP.train(model; log_frequency = 10)\n bound = SDDP.calculate_bound(model)\n\n if N == 2\n Test.@test bound <= -57.0\n elseif N == 3\n Test.@test bound <= -59.33\n elseif N == 6\n Test.@test bound <= -61.22\n end\n return\nend\n\nsldp_example_two(; N = 2)\nsldp_example_two(; N = 3)\nsldp_example_two(; N = 6)","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"EditURL = \"objective_states.jl\"","category":"page"},{"location":"tutorial/objective_states/#Objective-states","page":"Objective states","title":"Objective states","text":"","category":"section"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"There are many applications in which we want to model a price process that follows some auto-regressive process. 
Common examples include stock prices on financial exchanges and spot-prices in energy markets.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"However, it is well known that these cannot be incorporated into SDDP because they result in cost-to-go functions that are convex with respect to some state variables (e.g., the reservoir levels) and concave with respect to other state variables (e.g., the spot price in the current stage).","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"To overcome this problem, the approach in the literature has been to discretize the price process in order to model it using a Markovian policy graph like those discussed in Markovian policy graphs.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"However, recent work offers a way to include stagewise-dependent objective uncertainty into the objective function of SDDP subproblems. Readers are directed to the following works for an introduction:","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"Downward, A., Dowson, O., and Baucke, R. (2017). Stochastic dual dynamic programming with stagewise dependent objective uncertainty. Optimization Online. link\nDowson, O. PhD Thesis. University of Auckland, 2018. link","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"The method discussed in the above works introduces the concept of an objective state into SDDP. Unlike normal state variables in SDDP (e.g., the volume of water in the reservoir), the cost-to-go function is concave with respect to the objective states. Thus, the method builds an outer approximation of the cost-to-go function in the normal state-space, and an inner approximation of the cost-to-go function in the objective state-space.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"warning: Warning\nSupport for objective states in SDDP.jl is experimental. Models are considerably more computationally intensive, the interface is less user-friendly, and there are subtle gotchas to be aware of. Only use this if you have read and understood the theory behind the method.","category":"page"},{"location":"tutorial/objective_states/#One-dimensional-objective-states","page":"Objective states","title":"One-dimensional objective states","text":"","category":"section"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"Let's assume that the fuel cost is not fixed, but instead evolves according to a multiplicative auto-regressive process: fuel_cost[t] = ω * fuel_cost[t-1], where ω is drawn from the sample space [0.75, 0.9, 1.1, 1.25] with equal probability.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"An objective state can be added to a subproblem using the SDDP.add_objective_state function. This can only be called once per subproblem. If you want to add a multi-dimensional objective state, read Multi-dimensional objective states. SDDP.add_objective_state takes a number of keyword arguments. 
The two required ones are","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"initial_value: the value of the objective state at the root node of the policy graph (i.e., identical to the initial_value when defining normal state variables.\nlipschitz: the Lipschitz constant of the cost-to-go function with respect to the objective state. In other words, this value is the maximum change in the cost-to-go function at any point in the state space, given a one-unit change in the objective state.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"There are also two optional keyword arguments: lower_bound and upper_bound, which give SDDP.jl hints (importantly, not constraints) about the domain of the objective state. Setting these bounds appropriately can improve the speed of convergence.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"Finally, SDDP.add_objective_state requires an update function. This function takes two arguments. The first is the incoming value of the objective state, and the second is the realization of the stagewise-independent noise term (set using SDDP.parameterize). The function should return the value of the objective state to be used in the current subproblem.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"This connection with the stagewise-independent noise term means that SDDP.parameterize must be called in a subproblem that defines an objective state. Inside SDDP.parameterize, the value of the objective state to be used in the current subproblem (i.e., after the update function), can be queried using SDDP.objective_state.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"Here is the full model with the objective state.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"using SDDP, HiGHS\n\nmodel = SDDP.LinearPolicyGraph(;\n stages = 3,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, t\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n inflow\n end)\n @constraints(\n subproblem,\n begin\n volume.out == volume.in + inflow - hydro_generation - hydro_spill\n demand_constraint, thermal_generation + hydro_generation == 150.0\n end\n )\n\n # Add an objective state. 
ω will be the same value that is called in\n # `SDDP.parameterize`.\n\n SDDP.add_objective_state(\n subproblem;\n initial_value = 50.0,\n lipschitz = 10_000.0,\n lower_bound = 50.0,\n upper_bound = 150.0,\n ) do fuel_cost, ω\n return ω.fuel * fuel_cost\n end\n\n # Create the cartesian product of a multi-dimensional random variable.\n\n Ω = [\n (fuel = f, inflow = w) for f in [0.75, 0.9, 1.1, 1.25] for\n w in [0.0, 50.0, 100.0]\n ]\n\n SDDP.parameterize(subproblem, Ω) do ω\n # Query the current fuel cost.\n fuel_cost = SDDP.objective_state(subproblem)\n @stageobjective(subproblem, fuel_cost * thermal_generation)\n return JuMP.fix(inflow, ω.inflow)\n end\nend","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"After creating our model, we can train and simulate as usual.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"SDDP.train(model; run_numerical_stability_report = false)\n\nsimulations = SDDP.simulate(model, 1)\n\nprint(\"Finished training and simulating.\")","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"To demonstrate how the objective states are updated, consider the sequence of noise observations:","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"[stage[:noise_term] for stage in simulations[1]]","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"This, the fuel cost in the first stage should be 0.75 * 50 = 37.5. The fuel cost in the second stage should be 1.1 * 37.5 = 41.25. The fuel cost in the third stage should be 0.75 * 41.25 = 30.9375.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"To confirm this, the values of the objective state in a simulation can be queried using the :objective_state key.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"[stage[:objective_state] for stage in simulations[1]]","category":"page"},{"location":"tutorial/objective_states/#Multi-dimensional-objective-states","page":"Objective states","title":"Multi-dimensional objective states","text":"","category":"section"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"You can construct multi-dimensional price processes using NTuples. Just replace every scalar value associated with the objective state by a tuple. 
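Before moving on to the multi-dimensional case, the two queries above can be cross-checked against each other. The following is an illustrative sketch only (it is not part of the original tutorial); it assumes the `simulations` object returned by `SDDP.simulate` above and the multiplicative update with an initial fuel cost of 50.0:

```julia
# Rebuild the fuel cost from the simulated noise terms and compare it with
# the values recorded under the :objective_state key.
fuel_cost = 50.0
for stage in simulations[1]
    global fuel_cost *= stage[:noise_term].fuel
    println(fuel_cost, " ≈ ", stage[:objective_state])
end
```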
For example, initial_value = 1.0 becomes initial_value = (1.0, 2.0).","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"Here is an example:","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"model = SDDP.LinearPolicyGraph(;\n stages = 3,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, t\n @variable(subproblem, 0 <= volume <= 200, SDDP.State, initial_value = 200)\n @variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n inflow\n end)\n @constraints(\n subproblem,\n begin\n volume.out == volume.in + inflow - hydro_generation - hydro_spill\n demand_constraint, thermal_generation + hydro_generation == 150.0\n end\n )\n\n SDDP.add_objective_state(\n subproblem;\n initial_value = (50.0, 50.0),\n lipschitz = (10_000.0, 10_000.0),\n lower_bound = (50.0, 50.0),\n upper_bound = (150.0, 150.0),\n ) do fuel_cost, ω\n # fuel_cost is a tuple, containing the (fuel_cost[t-1], fuel_cost[t-2])\n # This function returns a new tuple containing\n # (fuel_cost[t], fuel_cost[t-1]). Thus, we need to compute the new\n # cost:\n new_cost = fuel_cost[1] + 0.5 * (fuel_cost[1] - fuel_cost[2]) + ω.fuel\n # And then return the appropriate tuple:\n return (new_cost, fuel_cost[1])\n end\n\n Ω = [\n (fuel = f, inflow = w) for f in [-10.0, -5.0, 5.0, 10.0] for\n w in [0.0, 50.0, 100.0]\n ]\n\n SDDP.parameterize(subproblem, Ω) do ω\n fuel_cost, _ = SDDP.objective_state(subproblem)\n @stageobjective(subproblem, fuel_cost * thermal_generation)\n return JuMP.fix(inflow, ω.inflow)\n end\nend\n\nSDDP.train(model; run_numerical_stability_report = false)\n\nsimulations = SDDP.simulate(model, 1)\n\nprint(\"Finished training and simulating.\")","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"This time, since our objective state is two-dimensional, the objective states are tuples with two elements:","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"[stage[:objective_state] for stage in simulations[1]]","category":"page"},{"location":"tutorial/objective_states/#objective_state_warnings","page":"Objective states","title":"Warnings","text":"","category":"section"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"There are number of things to be aware of when using objective states.","category":"page"},{"location":"tutorial/objective_states/","page":"Objective states","title":"Objective states","text":"The key assumption is that price is independent of the states and actions in the model.\nThat means that the price cannot appear in any @constraints. Nor can you use any @variables in the update function.\nChoosing an appropriate Lipschitz constant is difficult.\nThe points discussed in Choosing an initial bound are relevant. The Lipschitz constant should not be chosen as large as possible (since this will help with convergence and the numerical issues discussed above), but if chosen to small, it may cut of the feasible region and lead to a sub-optimal solution.\nYou need to ensure that the cost-to-go function is concave with respect to the objective state before the update.\nIf the update function is linear, this is always the case. 
In some situations, the update function can be nonlinear (e.g., multiplicative as we have above). In general, placing constraints on the price (e.g., clamp(price, 0, 1)) will destroy concavity. Caveat emptor. It's up to you if this is a problem. If it isn't you'll get a good heuristic with no guarantee of global optimality.","category":"page"},{"location":"examples/air_conditioning_forward/","page":"Training with a different forward model","title":"Training with a different forward model","text":"EditURL = \"air_conditioning_forward.jl\"","category":"page"},{"location":"examples/air_conditioning_forward/#Training-with-a-different-forward-model","page":"Training with a different forward model","title":"Training with a different forward model","text":"","category":"section"},{"location":"examples/air_conditioning_forward/","page":"Training with a different forward model","title":"Training with a different forward model","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/air_conditioning_forward/","page":"Training with a different forward model","title":"Training with a different forward model","text":"using SDDP\nimport HiGHS\nimport Test\n\nfunction create_air_conditioning_model(; convex::Bool)\n return SDDP.LinearPolicyGraph(;\n stages = 3,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n ) do sp, t\n @variable(sp, 0 <= x <= 100, SDDP.State, initial_value = 0)\n @variable(sp, 0 <= u_production <= 200)\n @variable(sp, u_overtime >= 0)\n if !convex\n set_integer(x.out)\n set_integer(u_production)\n set_integer(u_overtime)\n end\n @constraint(sp, demand, x.in - x.out + u_production + u_overtime == 0)\n Ω = [[100.0], [100.0, 300.0], [100.0, 300.0]]\n SDDP.parameterize(ω -> JuMP.set_normalized_rhs(demand, ω), sp, Ω[t])\n @stageobjective(sp, 100 * u_production + 300 * u_overtime + 50 * x.out)\n end\nend\n\nconvex = create_air_conditioning_model(; convex = true)\nnon_convex = create_air_conditioning_model(; convex = false)\nSDDP.train(\n convex;\n forward_pass = SDDP.AlternativeForwardPass(non_convex),\n post_iteration_callback = SDDP.AlternativePostIterationCallback(non_convex),\n iteration_limit = 10,\n)\nTest.@test isapprox(SDDP.calculate_bound(non_convex), 62_500.0, atol = 0.1)\nTest.@test isapprox(SDDP.calculate_bound(convex), 62_500.0, atol = 0.1)","category":"page"},{"location":"examples/objective_state_newsvendor/","page":"Newsvendor","title":"Newsvendor","text":"EditURL = \"objective_state_newsvendor.jl\"","category":"page"},{"location":"examples/objective_state_newsvendor/#Newsvendor","page":"Newsvendor","title":"Newsvendor","text":"","category":"section"},{"location":"examples/objective_state_newsvendor/","page":"Newsvendor","title":"Newsvendor","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/objective_state_newsvendor/","page":"Newsvendor","title":"Newsvendor","text":"This example is based on the classical newsvendor problem, but features an AR(1) spot-price.","category":"page"},{"location":"examples/objective_state_newsvendor/","page":"Newsvendor","title":"Newsvendor","text":" V(x[t-1], ω[t]) = max p[t] × u[t]\n subject to x[t] = x[t-1] - u[t] + ω[t]\n u[t] ∈ [0, 1]\n x[t] ≥ 0\n p[t] = p[t-1] + ϕ[t]","category":"page"},{"location":"examples/objective_state_newsvendor/","page":"Newsvendor","title":"Newsvendor","text":"The initial conditions are","category":"page"},{"location":"examples/objective_state_newsvendor/","page":"Newsvendor","title":"Newsvendor","text":"x[0] = 2.0\np[0] = 1.5\nω[t] ~ {0, 0.05, 0.10, ..., 0.45, 0.5} with uniform probability.\nϕ[t] ~ {-0.25, -0.125, 0.125, 0.25} with uniform probability.","category":"page"},{"location":"examples/objective_state_newsvendor/","page":"Newsvendor","title":"Newsvendor","text":"using SDDP, HiGHS, Statistics, Test\n\nfunction joint_distribution(; kwargs...)\n names = tuple([first(kw) for kw in kwargs]...)\n values = tuple([last(kw) for kw in kwargs]...)\n output_type = NamedTuple{names,Tuple{eltype.(values)...}}\n distribution = map(output_type, Base.product(values...))\n return distribution[:]\nend\n\nfunction newsvendor_example(; cut_type)\n model = SDDP.PolicyGraph(\n SDDP.LinearGraph(3);\n sense = :Max,\n upper_bound = 50.0,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, stage\n @variables(subproblem, begin\n x >= 0, (SDDP.State, initial_value = 2)\n 0 <= u <= 1\n w\n end)\n @constraint(subproblem, x.out == x.in - u + w)\n SDDP.add_objective_state(\n subproblem;\n initial_value = 1.5,\n lower_bound = 0.75,\n upper_bound = 2.25,\n lipschitz = 100.0,\n ) do y, ω\n return y + ω.price_noise\n end\n noise_terms = joint_distribution(;\n demand = 0:0.05:0.5,\n price_noise = [-0.25, -0.125, 0.125, 0.25],\n )\n SDDP.parameterize(subproblem, noise_terms) do ω\n JuMP.fix(w, ω.demand)\n price = SDDP.objective_state(subproblem)\n @stageobjective(subproblem, price * u)\n end\n end\n SDDP.train(\n model;\n log_frequency = 10,\n time_limit = 20.0,\n cut_type = cut_type,\n )\n @test SDDP.calculate_bound(model) ≈ 4.04 atol = 0.05\n results = SDDP.simulate(model, 500)\n objectives =\n [sum(s[:stage_objective] for s in simulation) for simulation in results]\n @test round(Statistics.mean(objectives); digits = 2) ≈ 4.04 atol = 0.1\n return\nend\n\nnewsvendor_example(; cut_type = SDDP.SINGLE_CUT)\nnewsvendor_example(; cut_type = SDDP.MULTI_CUT)","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"EditURL = \"arma.jl\"","category":"page"},{"location":"tutorial/arma/#Auto-regressive-stochastic-processes","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"","category":"section"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"SDDP.jl assumes that the random variable in each node is independent of the random variables in all other nodes. 
However, a common request is to model the random variables by some auto-regressive process.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"There are two ways to do this:","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"model the random variable as a Markov chain\nuse the \"state-space expansion\" trick","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"info: Info\nThis tutorial is in the context of a hydro-thermal scheduling example, but it should be apparent how the ideas transfer to other applications.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"using SDDP\nimport HiGHS","category":"page"},{"location":"tutorial/arma/#state-space-expansion","page":"Auto-regressive stochastic processes","title":"The state-space expansion trick","text":"","category":"section"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"In An introduction to SDDP.jl, we assumed that the inflows were stagewise-independent. However, in many cases this is not correct, and inflow models are more accurately described by an auto-regressive process such as:","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"inflow_t = inflow_t-1 + varepsilon","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"Here varepsilon is a random variable, and the inflow in stage t is the inflow in stage t-1 plus varepsilon (which might be negative).","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"For simplicity, we omit any coefficients and other terms, but this could easily be extended to a model like","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"inflow_t = a times inflow_t-1 + b + varepsilon","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"In practice, you can estimate a distribution for varepsilon by fitting the chosen statistical model to historical data, and then using the empirical residuals.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"To implement the auto-regressive model in SDDP.jl, we introduce inflow as a state variable.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"tip: Tip\nOur rule of thumb for \"when is something a state variable?\" is: if you need the value of a variable from a previous stage to compute something in stage t, then that variable is a state variable.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"model = SDDP.LinearPolicyGraph(;\n stages = 
3,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variable(sp, 0 <= x <= 200, SDDP.State, initial_value = 200)\n @variable(sp, g_t >= 0)\n @variable(sp, g_h >= 0)\n @variable(sp, s >= 0)\n @constraint(sp, g_h + g_t == 150)\n c = [50, 100, 150]\n @stageobjective(sp, c[t] * g_t)\n # =========================================================================\n # New stuff below Here\n # Add inflow as a state\n @variable(sp, inflow, SDDP.State, initial_value = 50.0)\n # Add the random variable as a control variable\n @variable(sp, ε)\n # The equation describing our statistical model\n @constraint(sp, inflow.out == inflow.in + ε)\n # The new water balance constraint using the state variable\n @constraint(sp, x.out == x.in - g_h - s + inflow.out)\n # Assume we have some empirical residuals:\n Ω = [-10.0, 0.1, 9.6]\n SDDP.parameterize(sp, Ω) do ω\n return JuMP.fix(ε, ω)\n end\nend","category":"page"},{"location":"tutorial/arma/#When-can-this-trick-be-used?","page":"Auto-regressive stochastic processes","title":"When can this trick be used?","text":"","category":"section"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"The state-space expansion trick should be used when:","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"The random variable appears additively in the objective or in the constraints. Something like inflow * decision_variable will not work.\nThe statistical model is linear, or can be written using the JuMP @constraint macro.\nThe dimension of the random variable is small (see Vector auto-regressive models for the multi-variate case).","category":"page"},{"location":"tutorial/arma/#The-Markov-chain-approach","page":"Auto-regressive stochastic processes","title":"The Markov chain approach","text":"","category":"section"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"In the Markov chain approach, we model the stochastic process for inflow by a discrete Markov chain. Markov chains are nodes with transition probabilities between the nodes. SDDP.jl has good support for solving problems in which the uncertainty is formulated as a Markov chain.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"The first step of the Markov chain approach is to write a function which simulates the stochastic process. 
Here is a simulator for our inflow model:","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"function simulator()\n inflow = zeros(3)\n current = 50.0\n Ω = [-10.0, 0.1, 9.6]\n for t in 1:3\n current += rand(Ω)\n inflow[t] = current\n end\n return inflow\nend","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"When called with no arguments, it produces a vector of inflows:","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"simulator()","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"warning: Warning\nThe simulator must return a Vector{Float64}, so it is limited to a uni-variate random variable. It is possible to do something similar for multi-variate random variable, but you'll have to manually construct the Markov transition matrix, and solution times scale poorly, even in the two-dimensional case.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"The next step is to call SDDP.MarkovianGraph with our simulator. This function will attempt to fit a Markov chain to the stochastic process produced by your simulator. There are two key arguments:","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"budget is the total number of nodes we want in the Markov chain\nscenarios is a limit on the number of times we can call simulator","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"graph = SDDP.MarkovianGraph(simulator; budget = 8, scenarios = 30)","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"Here we can see we have created a MarkovianGraph with nodes like (2, 59.7). 
The first element of each node is the stage, and the second element is the inflow.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"Create a SDDP.PolicyGraph using graph as follows:","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"model = SDDP.PolicyGraph(\n graph; # <--- New stuff\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, node\n t, inflow = node # <--- New stuff\n @variable(sp, 0 <= x <= 200, SDDP.State, initial_value = 200)\n @variable(sp, g_t >= 0)\n @variable(sp, g_h >= 0)\n @variable(sp, s >= 0)\n @constraint(sp, g_h + g_t == 150)\n c = [50, 100, 150]\n @stageobjective(sp, c[t] * g_t)\n # The new water balance constraint using the node:\n @constraint(sp, x.out == x.in - g_h - s + inflow)\nend","category":"page"},{"location":"tutorial/arma/#When-can-this-trick-be-used?-2","page":"Auto-regressive stochastic processes","title":"When can this trick be used?","text":"","category":"section"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"The Markov chain approach should be used when:","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"The random variable is uni-variate\nThe random variable appears in the objective function or as a variable coefficient in the constraint matrix\nIt's non-trivial to write the stochastic process as a series of constraints (for example, it uses nonlinear terms)\nThe number of nodes is modest (for example, a budget of hundreds, up to perhaps 1000)","category":"page"},{"location":"tutorial/arma/#Vector-auto-regressive-models","page":"Auto-regressive stochastic processes","title":"Vector auto-regressive models","text":"","category":"section"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"The state-space expansion section assumed that the random variable was uni-variate. However, the approach naturally extends to vector auto-regressive models. 
For example, if inflow is a 2-dimensional vector, then we can model a vector auto-regressive model to it as follows:","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"inflow_t = A times inflow_t-1 + b + varepsilon","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"Here A is a 2-by-2 matrix, and b and varepsilon are 2-by-1 vectors.","category":"page"},{"location":"tutorial/arma/","page":"Auto-regressive stochastic processes","title":"Auto-regressive stochastic processes","text":"model = SDDP.LinearPolicyGraph(;\n stages = 3,\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variable(sp, 0 <= x <= 200, SDDP.State, initial_value = 200)\n @variable(sp, g_t >= 0)\n @variable(sp, g_h >= 0)\n @variable(sp, s >= 0)\n @constraint(sp, g_h + g_t == 150)\n c = [50, 100, 150]\n @stageobjective(sp, c[t] * g_t)\n # =========================================================================\n # New stuff below Here\n # Add inflow as a state\n @variable(sp, inflow[1:2], SDDP.State, initial_value = 50.0)\n # Add the random variable as a control variable\n @variable(sp, ε[1:2])\n # The equation describing our statistical model\n A = [0.8 0.2; 0.2 0.8]\n @constraint(\n sp,\n [i = 1:2],\n inflow[i].out == sum(A[i, j] * inflow[j].in for j in 1:2) + ε[i],\n )\n # The new water balance constraint using the state variable\n @constraint(sp, x.out == x.in - g_h - s + inflow[1].out + inflow[2].out)\n # Assume we have some empirical residuals:\n Ω₁ = [-10.0, 0.1, 9.6]\n Ω₂ = [-10.0, 0.1, 9.6]\n Ω = [(ω₁, ω₂) for ω₁ in Ω₁ for ω₂ in Ω₂]\n SDDP.parameterize(sp, Ω) do ω\n JuMP.fix(ε[1], ω[1])\n JuMP.fix(ε[2], ω[2])\n return\n end\nend","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"EditURL = \"mdps.jl\"","category":"page"},{"location":"tutorial/mdps/#Example:-Markov-Decision-Processes","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"","category":"section"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"SDDP.jl can be used to solve a variety of Markov Decision processes. If the problem has continuous state and control spaces, and the objective and transition function are convex, then SDDP.jl can find a globally optimal policy. In other cases, SDDP.jl will find a locally optimal policy.","category":"page"},{"location":"tutorial/mdps/#A-simple-example","page":"Example: Markov Decision Processes","title":"A simple example","text":"","category":"section"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"A simple demonstration of this is the example taken from page 98 of the book \"Markov Decision Processes: Discrete stochastic Dynamic Programming\", by Martin L. 
Putterman.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"The example, as described in Section 4.6.3 of the book, is to minimize a sum of squares of N non-negative variables, subject to a budget constraint that the variable values add up to M. Put mathematically, that is:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"beginaligned\nmin sumlimits_i=1^N x_i^2 \nst sumlimits_i=1^N x_i = M \n x_i ge 0 quad i in 1ldotsN\nendaligned","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"The optimal objective value is M^2N, and the optimal solution is x_i = M N, which can be shown by induction.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"This can be reformulated as a Markov Decision Process by introducing a state variable, s, which tracks the un-spent budget over N stages.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"beginaligned\nV_t(s) = min x^2 + V_t+1(s^prime) \nst s^prime = s - x \n x le s \n x ge 0 \n s ge 0\nendaligned","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"and in the last stage V_N, there is an additional constraint that s^prime = 0.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"The budget of M is computed by solving for V_1(M).","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"info: Info\nSince everything here is continuous and convex, SDDP.jl will find the globally optimal policy.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"If the reformulation from the single problem into the recursive form of the Markov Decision Process is not obvious, consult Putterman's book.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"We can model and solve this problem using SDDP.jl as follows:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"using SDDP\nimport Ipopt\n\nM, N = 5, 3\n\nmodel = SDDP.LinearPolicyGraph(;\n stages = N,\n lower_bound = 0.0,\n optimizer = Ipopt.Optimizer,\n) do subproblem, node\n @variable(subproblem, s >= 0, SDDP.State, initial_value = M)\n @variable(subproblem, x >= 0)\n @stageobjective(subproblem, x^2)\n @constraint(subproblem, x <= s.in)\n @constraint(subproblem, s.out == s.in - x)\n if node == N\n fix(s.out, 0.0; force = true)\n end\n return\nend\n\nSDDP.train(model)","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"Check that we got the theoretical optimum:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision 
Processes","text":"SDDP.calculate_bound(model), M^2 / N","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"And check that we found the theoretical value for each x_i:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"simulations = SDDP.simulate(model, 1, [:x])\nfor data in simulations[1]\n println(\"x_$(data[:node_index]) = $(data[:x])\")\nend","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"Close enough! We don't get exactly 5/3 because of numerical tolerances within our choice of optimization solver (in this case, Ipopt).","category":"page"},{"location":"tutorial/mdps/#A-more-complicated-policy","page":"Example: Markov Decision Processes","title":"A more complicated policy","text":"","category":"section"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"SDDP.jl is also capable of finding policies for other types of Markov Decision Processes. A classic example of a Markov Decision Process is the problem of finding a path through a maze.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"Here's one example of a maze. Try changing the parameters to explore different mazes:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"M, N = 3, 4\ninitial_square = (1, 1)\nreward, illegal_squares, penalties = (3, 4), [(2, 2)], [(3, 1), (2, 4)]\npath = fill(\"⋅\", M, N)\npath[initial_square...] = \"1\"\nfor (k, v) in (illegal_squares => \"▩\", penalties => \"†\", [reward] => \"*\")\n for (i, j) in k\n path[i, j] = v\n end\nend\nprint(join([join(path[i, :], ' ') for i in 1:size(path, 1)], '\\n'))","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"Our goal is to get from square 1 to square *. If we step on a †, we incur a penalty of 1. Squares with ▩ are blocked; we cannot move there.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"There are a variety of ways that we can solve this problem. We're going to solve it using a stationary binary stochastic programming formulation.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"Our state variable will be a matrix of binary variables x_ij, where each element is 1 if the agent is in the square and 0 otherwise. In each period, we incur a reward of 1 if we are in the reward square and a penalty of -1 if we are in a penalties square. We cannot move to the illegal_squares, so those x_ij = 0. 
Feasibility between moves is modelled by constraints of the form:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"x^prime_ij le sumlimits_(ab)in P x_ab","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"where P is the set of squares from which it is valid to move from (a, b) to (i, j).","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"Because we are looking for a stationary policy, we need a unicyclic graph with a discount factor:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"discount_factor = 0.9\ngraph = SDDP.UnicyclicGraph(discount_factor)","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"Then we can formulate our full model:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"import HiGHS\n\nmodel = SDDP.PolicyGraph(\n graph;\n sense = :Max,\n upper_bound = 1 / (1 - discount_factor),\n optimizer = HiGHS.Optimizer,\n) do sp, _\n # Our state is a binary variable for each square\n @variable(\n sp,\n x[i = 1:M, j = 1:N],\n Bin,\n SDDP.State,\n initial_value = (i, j) == initial_square,\n )\n # Can only be in one square at a time\n @constraint(sp, sum(x[i, j].out for i in 1:M, j in 1:N) == 1)\n # Incur rewards and penalties\n @stageobjective(\n sp,\n x[reward...].out - sum(x[i, j].out for (i, j) in penalties)\n )\n # Some squares are illegal\n @constraint(sp, [(i, j) in illegal_squares], x[i, j].out <= 0)\n # Constraints on valid moves\n for i in 1:M, j in 1:N\n moves = [(i - 1, j), (i + 1, j), (i, j), (i, j + 1), (i, j - 1)]\n filter!(v -> 1 <= v[1] <= M && 1 <= v[2] <= N, moves)\n @constraint(sp, x[i, j].out <= sum(x[a, b].in for (a, b) in moves))\n end\n return\nend","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"The upper bound is obtained by assuming that we reach the reward square in one move and stay there.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"warning: Warning\nSince there are discrete decisions here, SDDP.jl is not guaranteed to find the globally optimal policy.","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"SDDP.train(model)","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"Simulating a cyclic policy graph requires an explicit sampling_scheme that does not terminate early based on the cycle probability:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"simulations = SDDP.simulate(\n model,\n 1,\n [:x];\n sampling_scheme = SDDP.InSampleMonteCarlo(;\n max_depth = 5,\n terminate_on_dummy_leaf = false,\n ),\n);\nnothing #hide","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision 
Processes","title":"Example: Markov Decision Processes","text":"Fill in the path with the time-step in which we visit the square:","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"for (t, data) in enumerate(simulations[1]), i in 1:M, j in 1:N\n if data[:x][i, j].in > 0.5\n path[i, j] = \"$t\"\n end\nend\n\nprint(join([join(path[i, :], ' ') for i in 1:size(path, 1)], '\\n'))","category":"page"},{"location":"tutorial/mdps/","page":"Example: Markov Decision Processes","title":"Example: Markov Decision Processes","text":"tip: Tip\nThis formulation will likely struggle as the number of cells in the maze increases. Can you think of an equivalent formulation that uses fewer state variables?","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"EditURL = \"Hydro_thermal.jl\"","category":"page"},{"location":"examples/Hydro_thermal/#Hydro-thermal-scheduling","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"","category":"section"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"examples/Hydro_thermal/#Problem-Description","page":"Hydro-thermal scheduling","title":"Problem Description","text":"","category":"section"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"In a hydro-thermal problem, the agent controls a hydro-electric generator and reservoir. Each time period, they need to choose a generation quantity from thermal g_t, and hydro g_h, in order to meet demand w_d, which is a stagewise-independent random variable. The state variable, x, is the quantity of water in the reservoir at the start of each time period, and it has a minimum level of 5 units and a maximum level of 15 units. We assume that there are 10 units of water in the reservoir at the start of time, so that x_0 = 10. The state-variable is connected through time by the water balance constraint: x.out = x.in - g_h - s + w_i, where x.out is the quantity of water at the end of the time period, x.in is the quantity of water at the start of the time period, s is the quantity of water spilled from the reservoir, and w_i is a stagewise-independent random variable that represents the inflow into the reservoir during the time period.","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"We assume that there are three stages, t=1, 2, 3, representing summer-fall, winter, and spring, and that we are solving this problem in an infinite-horizon setting with a discount factor of 0.95.","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"In each stage, the agent incurs the cost of spillage, plus the cost of thermal generation. 
We assume that the cost of thermal generation is dependent on the stage t = 1, 2, 3, and that in each stage, w is drawn from the set (w_i, w_d) = {(0, 7.5), (3, 5), (10, 2.5)} with equal probability.","category":"page"},{"location":"examples/Hydro_thermal/#Importing-packages","page":"Hydro-thermal scheduling","title":"Importing packages","text":"","category":"section"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"For this example, in addition to SDDP, we need HiGHS as a solver and Statistics to compute the mean of our simulations.","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"using HiGHS\nusing SDDP\nusing Statistics","category":"page"},{"location":"examples/Hydro_thermal/#Constructing-the-policy-graph","page":"Hydro-thermal scheduling","title":"Constructing the policy graph","text":"","category":"section"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"There are three stages in our infinite-horizon problem, so we construct a unicyclic policy graph using SDDP.UnicyclicGraph:","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"graph = SDDP.UnicyclicGraph(0.95; num_nodes = 3)","category":"page"},{"location":"examples/Hydro_thermal/#Constructing-the-model","page":"Hydro-thermal scheduling","title":"Constructing the model","text":"","category":"section"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"Much of the macro code (i.e., lines starting with @) in the first part of the following should be familiar to users of JuMP.","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"Inside the do-end block, sp is a standard JuMP model, and t is an index for the node that will be called with t = 1, 2, 3.","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"The state variable x, constructed by passing the SDDP.State tag to @variable, is actually a Julia struct with two fields: x.in and x.out, corresponding to the incoming and outgoing state variables respectively. Both x.in and x.out are standard JuMP variables. The initial_value keyword provides the value of the state variable in the root node (i.e., x_0).","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"Compared to a JuMP model, one key difference is that we use @stageobjective instead of @objective. The SDDP.parameterize function takes a list of supports for w and parameterizes the JuMP model sp by setting the right-hand sides of the appropriate constraints (note how the constraints initially have a right-hand side of 0). 
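As the next paragraph notes, the realizations do not have to be equally likely. The following sketch is an aside and not part of this example's model: it assumes the optional third positional argument of SDDP.parameterize that accepts a probability mass vector, the weights 0.5, 0.3, 0.2 are purely illustrative, and `sp`, `balance`, and `demand` refer to the subproblem and constraints defined in the model below.

```julia
# Hypothetical variant of the parameterization with non-uniform probabilities.
SDDP.parameterize(sp, [[0, 7.5], [3, 5], [10, 2.5]], [0.5, 0.3, 0.2]) do w
    set_normalized_rhs(balance, w[1])
    return set_normalized_rhs(demand, w[2])
end
```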
By default, it is assumed that the realizations have uniform probability, but a probability mass vector can also be provided.","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"model = SDDP.PolicyGraph(\n graph;\n sense = :Min,\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do sp, t\n @variable(sp, 5 <= x <= 15, SDDP.State, initial_value = 10)\n @variable(sp, g_t >= 0)\n @variable(sp, g_h >= 0)\n @variable(sp, s >= 0)\n @constraint(sp, balance, x.out - x.in + g_h + s == 0)\n @constraint(sp, demand, g_h + g_t == 0)\n @stageobjective(sp, s + t * g_t)\n SDDP.parameterize(sp, [[0, 7.5], [3, 5], [10, 2.5]]) do w\n set_normalized_rhs(balance, w[1])\n return set_normalized_rhs(demand, w[2])\n end\nend","category":"page"},{"location":"examples/Hydro_thermal/#Training-the-policy","page":"Hydro-thermal scheduling","title":"Training the policy","text":"","category":"section"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"Once a model has been constructed, the next step is to train the policy. This can be achieved using SDDP.train. There are many options that can be passed, but iteration_limit terminates the training after the prescribed number of SDDP iterations.","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"SDDP.train(model; iteration_limit = 100)","category":"page"},{"location":"examples/Hydro_thermal/#Simulating-the-policy","page":"Hydro-thermal scheduling","title":"Simulating the policy","text":"","category":"section"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"After training, we can simulate the policy using SDDP.simulate.","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"sims = SDDP.simulate(model, 100, [:g_t])\nmu = round(mean([s[1][:g_t] for s in sims]); digits = 2)\nprintln(\"On average, $(mu) units of thermal are used in the first stage.\")","category":"page"},{"location":"examples/Hydro_thermal/#Extracting-the-water-values","page":"Hydro-thermal scheduling","title":"Extracting the water values","text":"","category":"section"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"Finally, we can use SDDP.ValueFunction and SDDP.evaluate to obtain and evaluate the value function at different points in the state-space. Note that since we are minimizing, the price has a negative sign: each additional unit of water leads to a decrease in the expected long-run cost.","category":"page"},{"location":"examples/Hydro_thermal/","page":"Hydro-thermal scheduling","title":"Hydro-thermal scheduling","text":"V = SDDP.ValueFunction(model[1])\ncost, price = SDDP.evaluate(V; x = 10)","category":"page"},{"location":"examples/hydro_valley/","page":"Hydro valleys","title":"Hydro valleys","text":"EditURL = \"hydro_valley.jl\"","category":"page"},{"location":"examples/hydro_valley/#Hydro-valleys","page":"Hydro valleys","title":"Hydro valleys","text":"","category":"section"},{"location":"examples/hydro_valley/","page":"Hydro valleys","title":"Hydro valleys","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. 
Download the source as a .ipynb file.","category":"page"},{"location":"examples/hydro_valley/","page":"Hydro valleys","title":"Hydro valleys","text":"This problem is a version of the hydro-thermal scheduling problem. The goal is to operate two hydro-dams in a valley chain over time in the face of inflow and price uncertainty.","category":"page"},{"location":"examples/hydro_valley/","page":"Hydro valleys","title":"Hydro valleys","text":"Turbine response curves are modelled by piecewise linear functions which map the flow rate into a power. These can be controlled by specifying the breakpoints in the piecewise linear function as the knots in the Turbine struct.","category":"page"},{"location":"examples/hydro_valley/","page":"Hydro valleys","title":"Hydro valleys","text":"The model can be created using the hydro_valley_model function. It has a few keyword arguments to allow automated testing of the library. hasstagewiseinflows determines if the RHS noise constraint should be added. hasmarkovprice determines if the price uncertainty (modelled by a Markov chain) should be added.","category":"page"},{"location":"examples/hydro_valley/","page":"Hydro valleys","title":"Hydro valleys","text":"In the third stage, the Markov chain has some unreachable states to test some code-paths in the library.","category":"page"},{"location":"examples/hydro_valley/","page":"Hydro valleys","title":"Hydro valleys","text":"We can also set the sense to :Min or :Max (the objective and bound are flipped appropriately).","category":"page"},{"location":"examples/hydro_valley/","page":"Hydro valleys","title":"Hydro valleys","text":"using SDDP, HiGHS, Test, Random\n\nstruct Turbine\n flowknots::Vector{Float64}\n powerknots::Vector{Float64}\nend\n\nstruct Reservoir\n min::Float64\n max::Float64\n initial::Float64\n turbine::Turbine\n spill_cost::Float64\n inflows::Vector{Float64}\nend\n\nfunction hydro_valley_model(;\n hasstagewiseinflows::Bool = true,\n hasmarkovprice::Bool = true,\n sense::Symbol = :Max,\n)\n valley_chain = [\n Reservoir(\n 0,\n 200,\n 200,\n Turbine([50, 60, 70], [55, 65, 70]),\n 1000,\n [0, 20, 50],\n ),\n Reservoir(\n 0,\n 200,\n 200,\n Turbine([50, 60, 70], [55, 65, 70]),\n 1000,\n [0, 0, 20],\n ),\n ]\n\n turbine(i) = valley_chain[i].turbine\n\n # Prices[t, Markov state]\n prices = [\n 1 2 0\n 2 1 0\n 3 4 0\n ]\n\n # Transition matrix\n if hasmarkovprice\n transition =\n Array{Float64,2}[[1.0]', [0.6 0.4], [0.6 0.4 0.0; 0.3 0.7 0.0]]\n else\n transition = [ones(Float64, (1, 1)) for t in 1:3]\n end\n\n flipobj = (sense == :Max) ? 1.0 : -1.0\n lower = (sense == :Max) ? -Inf : -1e6\n upper = (sense == :Max) ? 
1e6 : Inf\n\n N = length(valley_chain)\n\n # Initialise SDDP Model\n return m = SDDP.MarkovianPolicyGraph(;\n sense = sense,\n lower_bound = lower,\n upper_bound = upper,\n transition_matrices = transition,\n optimizer = HiGHS.Optimizer,\n ) do subproblem, node\n t, markov_state = node\n\n # ------------------------------------------------------------------\n # SDDP State Variables\n # Level of upper reservoir\n @variable(\n subproblem,\n valley_chain[r].min <= reservoir[r = 1:N] <= valley_chain[r].max,\n SDDP.State,\n initial_value = valley_chain[r].initial\n )\n\n # ------------------------------------------------------------------\n # Additional variables\n @variables(\n subproblem,\n begin\n outflow[r = 1:N] >= 0\n spill[r = 1:N] >= 0\n inflow[r = 1:N] >= 0\n generation_quantity >= 0 # Total quantity of water\n # Proportion of levels to dispatch on\n 0 <=\n dispatch[r = 1:N, level = 1:length(turbine(r).flowknots)] <=\n 1\n rainfall[i = 1:N]\n end\n )\n\n # ------------------------------------------------------------------\n # Constraints\n @constraints(\n subproblem,\n begin\n # flow from upper reservoir\n reservoir[1].out ==\n reservoir[1].in + inflow[1] - outflow[1] - spill[1]\n\n # other flows\n flow[i = 2:N],\n reservoir[i].out ==\n reservoir[i].in + inflow[i] - outflow[i] - spill[i] +\n outflow[i-1] +\n spill[i-1]\n\n # Total quantity generated\n generation_quantity == sum(\n turbine(r).powerknots[level] * dispatch[r, level] for\n r in 1:N for level in 1:length(turbine(r).powerknots)\n )\n\n # ------------------------------------------------------------------\n # Flow out\n turbineflow[r = 1:N],\n outflow[r] == sum(\n turbine(r).flowknots[level] * dispatch[r, level] for\n level in 1:length(turbine(r).flowknots)\n )\n\n # Dispatch combination of levels\n dispatched[r = 1:N],\n sum(\n dispatch[r, level] for\n level in 1:length(turbine(r).flowknots)\n ) <= 1\n end\n )\n\n # rainfall noises\n if hasstagewiseinflows && t > 1 # in future stages random inflows\n @constraint(subproblem, inflow_noise[i = 1:N], inflow[i] <= rainfall[i])\n\n SDDP.parameterize(\n subproblem,\n [\n (valley_chain[1].inflows[i], valley_chain[2].inflows[i]) for i in 1:length(transition)\n ],\n ) do ω\n for i in 1:N\n JuMP.fix(rainfall[i], ω[i])\n end\n end\n else # in the first stage deterministic inflow\n @constraint(\n subproblem,\n initial_inflow_noise[i = 1:N],\n inflow[i] <= valley_chain[i].inflows[1]\n )\n end\n\n # ------------------------------------------------------------------\n # Objective Function\n if hasmarkovprice\n @stageobjective(\n subproblem,\n flipobj * (\n prices[t, markov_state] * generation_quantity -\n sum(valley_chain[i].spill_cost * spill[i] for i in 1:N)\n )\n )\n else\n @stageobjective(\n subproblem,\n flipobj * (\n prices[t, 1] * generation_quantity -\n sum(valley_chain[i].spill_cost * spill[i] for i in 1:N)\n )\n )\n end\n end\nend\n\nfunction test_hydro_valley_model()\n\n # For repeatability\n Random.seed!(11111)\n\n # deterministic\n deterministic_model = hydro_valley_model(;\n hasmarkovprice = false,\n hasstagewiseinflows = false,\n )\n SDDP.train(\n deterministic_model;\n iteration_limit = 10,\n cut_deletion_minimum = 1,\n print_level = 0,\n )\n @test SDDP.calculate_bound(deterministic_model) ≈ 835.0 atol = 1e-3\n\n # stagewise inflows\n stagewise_model = hydro_valley_model(; hasmarkovprice = false)\n SDDP.train(stagewise_model; iteration_limit = 20, print_level = 0)\n @test SDDP.calculate_bound(stagewise_model) ≈ 838.33 atol = 1e-2\n\n # Markov prices\n markov_model = 
hydro_valley_model(; hasstagewiseinflows = false)\n SDDP.train(markov_model; iteration_limit = 10, print_level = 0)\n @test SDDP.calculate_bound(markov_model) ≈ 851.8 atol = 1e-2\n\n # stagewise inflows and Markov prices\n markov_stagewise_model =\n hydro_valley_model(; hasstagewiseinflows = true, hasmarkovprice = true)\n SDDP.train(markov_stagewise_model; iteration_limit = 10, print_level = 0)\n @test SDDP.calculate_bound(markov_stagewise_model) ≈ 855.0 atol = 1.0\n\n # risk averse stagewise inflows and Markov prices\n riskaverse_model = hydro_valley_model()\n SDDP.train(\n riskaverse_model;\n risk_measure = SDDP.EAVaR(; lambda = 0.5, beta = 0.66),\n iteration_limit = 10,\n print_level = 0,\n )\n @test SDDP.calculate_bound(riskaverse_model) ≈ 828.157 atol = 1.0\n\n # stagewise inflows and Markov prices\n worst_case_model = hydro_valley_model(; sense = :Min)\n SDDP.train(\n worst_case_model;\n risk_measure = SDDP.EAVaR(; lambda = 0.5, beta = 0.0),\n iteration_limit = 10,\n print_level = 0,\n )\n @test SDDP.calculate_bound(worst_case_model) ≈ -780.867 atol = 1.0\n\n # stagewise inflows and Markov prices\n cutselection_model = hydro_valley_model()\n SDDP.train(\n cutselection_model;\n iteration_limit = 10,\n print_level = 0,\n cut_deletion_minimum = 2,\n )\n @test SDDP.calculate_bound(cutselection_model) ≈ 855.0 atol = 1.0\n\n # Distributionally robust Optimization\n dro_model = hydro_valley_model(; hasmarkovprice = false)\n SDDP.train(\n dro_model;\n risk_measure = SDDP.ModifiedChiSquared(sqrt(2 / 3) - 1e-6),\n iteration_limit = 10,\n print_level = 0,\n )\n @test SDDP.calculate_bound(dro_model) ≈ 835.0 atol = 1.0\n\n dro_model = hydro_valley_model(; hasmarkovprice = false)\n SDDP.train(\n dro_model;\n risk_measure = SDDP.ModifiedChiSquared(1 / 6),\n iteration_limit = 20,\n print_level = 0,\n )\n @test SDDP.calculate_bound(dro_model) ≈ 836.695 atol = 1.0\n # (Note) radius ≈ sqrt(2/3), will set all noise probabilities to zero except the worst case noise\n # (Why?):\n # The distance from the uniform distribution (the assumed \"true\" distribution)\n # to a corner of a unit simplex is sqrt(S-1)/sqrt(S) if we have S scenarios. The corner\n # of a unit simplex is just a unit vector, i.e.: [0 ... 0 1 0 ... 0]. 
With this probability\n # vector, only one noise has a non-zero probablity.\n # In the worst case rhsnoise (0 inflows) the profit is:\n # Reservoir1: 70 * $3 + 70 * $2 + 65 * $1 +\n # Reservoir2: 70 * $3 + 70 * $2 + 70 * $1\n ### = $835\nend\n\ntest_hydro_valley_model()","category":"page"},{"location":"guides/add_noise_in_the_constraint_matrix/#Add-noise-in-the-constraint-matrix","page":"Add noise in the constraint matrix","title":"Add noise in the constraint matrix","text":"","category":"section"},{"location":"guides/add_noise_in_the_constraint_matrix/","page":"Add noise in the constraint matrix","title":"Add noise in the constraint matrix","text":"DocTestSetup = quote\n using SDDP, HiGHS\nend","category":"page"},{"location":"guides/add_noise_in_the_constraint_matrix/","page":"Add noise in the constraint matrix","title":"Add noise in the constraint matrix","text":"SDDP.jl supports coefficients in the constraint matrix through the JuMP.set_normalized_coefficient function.","category":"page"},{"location":"guides/add_noise_in_the_constraint_matrix/","page":"Add noise in the constraint matrix","title":"Add noise in the constraint matrix","text":"julia> model = SDDP.LinearPolicyGraph(\n stages=3, lower_bound = 0, optimizer = HiGHS.Optimizer\n ) do subproblem, t\n @variable(subproblem, x, SDDP.State, initial_value = 0.0)\n @constraint(subproblem, emissions, 1x.out <= 1)\n SDDP.parameterize(subproblem, [0.2, 0.5, 1.0]) do ω\n JuMP.set_normalized_coefficient(emissions, x.out, ω)\n println(emissions)\n end\n @stageobjective(subproblem, -x.out)\n end\nA policy graph with 3 nodes.\n Node indices: 1, 2, 3\n\njulia> SDDP.simulate(model, 1);\nemissions : x_out <= 1\nemissions : 0.2 x_out <= 1\nemissions : 0.5 x_out <= 1","category":"page"},{"location":"guides/add_noise_in_the_constraint_matrix/","page":"Add noise in the constraint matrix","title":"Add noise in the constraint matrix","text":"note: Note\nJuMP will normalize constraints by moving all variables to the left-hand side. Thus, @constraint(model, 0 <= 1 - x.out) becomes x.out <= 1. JuMP.set_normalized_coefficient sets the coefficient on the normalized constraint.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"EditURL = \"risk.jl\"","category":"page"},{"location":"explanation/risk/#Risk-aversion","page":"Risk aversion","title":"Risk aversion","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"In Introductory theory, we implemented a basic version of the SDDP algorithm. This tutorial extends that implementation to add risk-aversion.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Packages","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"This tutorial uses the following packages. For clarity, we call import PackageName so that we must prefix PackageName. to all functions and structs provided by that package. 
Everything not prefixed is either part of base Julia, or we wrote it.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"import ForwardDiff\nimport HiGHS\nimport Ipopt\nimport JuMP\nimport Statistics","category":"page"},{"location":"explanation/risk/#Risk-aversion:-what-and-why?","page":"Risk aversion","title":"Risk aversion: what and why?","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Often, the agents making decisions in complex systems are risk-averse, that is, they care more about avoiding very bad outcomes, than they do about having a good average outcome.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"As an example, consumers in a hydro-thermal problem may be willing to pay a slightly higher electricity price on average, if it means that there is a lower probability of blackouts.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Risk aversion in multistage stochastic programming has been well studied in the academic literature, and is widely used in production implementations around the world.","category":"page"},{"location":"explanation/risk/#Risk-measures","page":"Risk aversion","title":"Risk measures","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"One way to add risk aversion to models is to use a risk measure. A risk measure is a function that maps a random variable to a real number.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"You are probably already familiar with lots of different risk measures. For example, the mean, median, mode, and maximum are all risk measures.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We call the act of applying a risk measure to a random variable \"computing the risk\" of a random variable.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"To keep things simple, and because we need it for SDDP, we restrict our attention to random variables Z with a finite sample space Omega and positive probabilities p_omega for all omega in Omega. 
We denote the realizations of Z by Z(omega) = z_omega.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"A risk measure, mathbbFZ, is a convex risk measure if it satisfies the following axioms:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Axiom 1: monotonicity","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Given two random variables Z_1 and Z_2, with Z_1 le Z_2 almost surely, then mathbbFZ_1 le FZ_2.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Axiom 2: translation equivariance","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Given two random variables Z_1 and Z_2, then for all a in mathbbR, mathbbFZ + a = mathbbFZ + a.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Axiom 3: convexity","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Given two random variables Z_1 and Z_2, then for all a in 0 1,","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"mathbbFa Z_1 + (1 - a) Z_2 le a mathbbFZ_1 + (1-a)mathbbFZ_2","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Now we know what a risk measure is, let's see how we can use them to form risk-averse decision rules.","category":"page"},{"location":"explanation/risk/#Risk-averse-decision-rules:-Part-I","page":"Risk aversion","title":"Risk-averse decision rules: Part I","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We started this tutorial by explaining that we are interested in risk aversion because some agents are risk-averse. What that really means, is that they want a policy that is also risk-averse. The question then becomes, how do we create risk-averse decision rules and policies?","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Recall from Introductory theory that we can form an optimal decision rule using the recursive formulation:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"beginaligned\nV_i(x omega) = minlimits_barx x^prime u C_i(barx u omega) + mathbbE_j in i^+ varphi in Omega_jV_j(x^prime varphi)\n x^prime = T_i(barx u omega) \n u in U_i(barx omega) \n barx = x\nendaligned","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"where our decision rule, pi_i(x omega), solves this optimization problem and returns a u^* corresponding to an optimal solution.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"If we can replace the expectation operator mathbbE with another (more risk-averse) risk measure mathbbF, then our decision rule will attempt to choose a control decision now that minimizes the risk of the future costs, as opposed to the expectation of the future costs. 
This makes our decisions more risk-averse, because we care more about the worst outcomes than we do about the average.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Therefore, we can form a risk-averse decision rule using the formulation:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"beginaligned\nV_i(x omega) = minlimits_barx x^prime u C_i(barx u omega) + mathbbF_j in i^+ varphi in Omega_jV_j(x^prime varphi)\n x^prime = T_i(barx u omega) \n u in U_i(barx omega) \n barx = x\nendaligned","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"To convert this problem into a tractable equivalent, we apply Kelley's algorithm to the risk-averse cost-to-go term mathbbF_j in i^+ varphi in Omega_jV_j(x^prime varphi), to obtain the approximated problem:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"beginaligned\nV_i^K(x omega) = minlimits_barx x^prime u C_i(barx u omega) + theta\n x^prime = T_i(barx u omega) \n u in U_i(barx omega) \n barx = x \n theta ge mathbbF_j in i^+ varphi in Omega_jleftV_j^k(x^prime_k varphi)right + fracddx^primemathbbF_j in i^+ varphi in Omega_jleftV_j^k(x^prime_k varphi)right^top (x^prime - x^prime_k)quad k=1ldotsK\nendaligned","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"warning: Warning\nNote how we need to explicitly compute a risk-averse subgradient! (We need a subgradient because the function might not be differentiable.) When constructing cuts with the expectation operator in Introductory theory, we implicitly used the law of total expectation to combine the two expectations; we can't do that for a general risk measure.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"tip: Homework challenge\nIf it's not obvious why we can use Kelley's here, try to use the axioms of a convex risk measure to show that mathbbF_j in i^+ varphi in Omega_jV_j(x^prime varphi) is a convex function w.r.t. x^prime if V_j is also a convex function.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Our challenge is now to find a way to compute the risk-averse cost-to-go function mathbbF_j in i^+ varphi in Omega_jleftV_j^k(x^prime_k varphi)right, and a way to compute a subgradient of the risk-averse cost-to-go function with respect to x^prime.","category":"page"},{"location":"explanation/risk/#Primal-risk-measures","page":"Risk aversion","title":"Primal risk measures","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Now we know what a risk measure is, and how we will use it, let's implement some code to see how we can compute the risk of some random variables.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"note: Note\nWe're going to start by implementing the primal version of each risk measure. 
We implement the dual version in the next section.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"First, we need some data:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Z = [1.0, 2.0, 3.0, 4.0]","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"with probabilities:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"p = [0.1, 0.2, 0.4, 0.3]","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We're going to implement a number of different risk measures, so to leverage Julia's multiple dispatch, we create an abstract type:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"abstract type AbstractRiskMeasure end","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"and function to overload:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"\"\"\"\n primal_risk(F::AbstractRiskMeasure, Z::Vector{<:Real}, p::Vector{Float64})\n\nUse `F` to compute the risk of the random variable defined by a vector of costs\n`Z` and non-zero probabilities `p`.\n\"\"\"\nfunction primal_risk end","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"note: Note\nWe want Vector{<:Real} instead of Vector{Float64} because we're going to automatically differentiate this function in the next section.","category":"page"},{"location":"explanation/risk/#Expectation","page":"Risk aversion","title":"Expectation","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"The expectation, mathbbE, also called the mean or the average, is the most widely used convex risk measure. The expectation of a random variable is just the sum of Z weighted by the probability:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"mathbbFZ = mathbbE_pZ = sumlimits_omegainOmega p_omega z_omega","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"struct Expectation <: AbstractRiskMeasure end\n\nfunction primal_risk(::Expectation, Z::Vector{<:Real}, p::Vector{Float64})\n return sum(p[i] * Z[i] for i in 1:length(p))\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Let's try it out:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"primal_risk(Expectation(), Z, p)","category":"page"},{"location":"explanation/risk/#WorstCase","page":"Risk aversion","title":"WorstCase","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"The worst-case risk measure, also called the maximum, is another widely used convex risk measure. 
This risk measure doesn't care about the probability vector p, only the cost vector Z:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"mathbbFZ = maxZ = maxlimits_omegainOmega z_omega","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"struct WorstCase <: AbstractRiskMeasure end\n\nfunction primal_risk(::WorstCase, Z::Vector{<:Real}, ::Vector{Float64})\n return maximum(Z)\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Let's try it out:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"primal_risk(WorstCase(), Z, p)","category":"page"},{"location":"explanation/risk/#Entropic","page":"Risk aversion","title":"Entropic","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"A more interesting, and less widely used risk measure is the entropic risk measure. The entropic risk measure is parameterized by a value gamma 0, and computes the risk of a random variable as:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"mathbbF_gammaZ = frac1gammalogleft(mathbbE_pe^gamma Zright) = frac1gammalogleft(sumlimits_omegainOmegap_omega e^gamma z_omegaright)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"tip: Homework challenge\nProve that the entropic risk measure satisfies the three axioms of a convex risk measure.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"struct Entropic <: AbstractRiskMeasure\n γ::Float64\n function Entropic(γ)\n if !(γ > 0)\n throw(DomainError(γ, \"Entropic risk measure must have γ > 0.\"))\n end\n return new(γ)\n end\nend\n\nfunction primal_risk(F::Entropic, Z::Vector{<:Real}, p::Vector{Float64})\n return 1 / F.γ * log(sum(p[i] * exp(F.γ * Z[i]) for i in 1:length(p)))\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"warning: Warning\nexp(x) overflows when x 709. Therefore, if we are passed a vector of Float64, use arbitrary precision arithmetic with big.(Z).","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function primal_risk(F::Entropic, Z::Vector{Float64}, p::Vector{Float64})\n return Float64(primal_risk(F, big.(Z), p))\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Let's try it out for different values of gamma:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"for γ in [0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1_000.0]\n println(\"γ = $(γ), F[Z] = \", primal_risk(Entropic(γ), Z, p))\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"info: Info\nThe entropic has two extremes. As gamma rightarrow 0, the entropic acts like the expectation risk measure, and as gamma rightarrow infty, the entropic acts like the worst-case risk measure.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Computing risk measures this way works well for computing the primal value. 
However, there isn't an obvious way to compute a subgradient of the risk-averse cost-to-go function, which we need for our cut calculation.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"There is a nice solution to this problem, and that is to use the dual representation of a risk measure, instead of the primal.","category":"page"},{"location":"explanation/risk/#Dual-risk-measures","page":"Risk aversion","title":"Dual risk measures","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Convex risk measures have a dual representation as follows:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"mathbbFZ = suplimits_q inmathcalM(p) mathbbE_qZ - alpha(p q)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"where alpha is a concave function that maps the probability vectors p and q to a real number, and mathcalM(p) subseteq mathcalP is a convex subset of the probability simplex:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"mathcalP = p ge 0sumlimits_omegainOmegap_omega = 1","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"The dual of a convex risk measure can be interpreted as taking the expectation of the random variable Z with respect to the worst probability vector q that lies within the set mathcalM, less some concave penalty term alpha(p q).","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"If we define a function dual_risk_inner that computes q and α:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"\"\"\"\n dual_risk_inner(\n F::AbstractRiskMeasure, Z::Vector{Float64}, p::Vector{Float64}\n )::Tuple{Vector{Float64},Float64}\n\nReturn a tuple formed by the worst-case probability vector `q` and the\ncorresponding evaluation `α(p, q)`.\n\"\"\"\nfunction dual_risk_inner end","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"then we can write a generic dual_risk function as:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function dual_risk(\n F::AbstractRiskMeasure,\n Z::Vector{Float64},\n p::Vector{Float64},\n)\n q, α = dual_risk_inner(F, Z, p)\n return sum(q[i] * Z[i] for i in 1:length(q)) - α\nend","category":"page"},{"location":"explanation/risk/#Expectation-2","page":"Risk aversion","title":"Expectation","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"For the expectation risk measure, mathcalM(p) = p, and alpha(cdot cdot) = 0. 
Therefore:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function dual_risk_inner(::Expectation, ::Vector{Float64}, p::Vector{Float64})\n return p, 0.0\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We can check we get the same result as the primal version:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"dual_risk(Expectation(), Z, p) == primal_risk(Expectation(), Z, p)","category":"page"},{"location":"explanation/risk/#Worst-case","page":"Risk aversion","title":"Worst-case","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"For the worst-case risk measure, mathcalM(p) = mathcalP, and alpha(cdot cdot) = 0. Therefore, the dual representation just puts all of the probability weight on the maximum outcome:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function dual_risk_inner(::WorstCase, Z::Vector{Float64}, ::Vector{Float64})\n q = zeros(length(Z))\n _, index = findmax(Z)\n q[index] = 1.0\n return q, 0.0\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We can check we get the same result as the primal version:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"dual_risk(WorstCase(), Z, p) == primal_risk(WorstCase(), Z, p)","category":"page"},{"location":"explanation/risk/#Entropic-2","page":"Risk aversion","title":"Entropic","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"For the entropic risk measure, mathcalM(p) = mathcalP, and:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"alpha(p q) = frac1gammasumlimits_omegainOmega q_omega logleft(fracq_omegap_omegaright)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"One way to solve the dual problem is to explicitly solve a nonlinear optimization problem:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function dual_risk_inner(F::Entropic, Z::Vector{Float64}, p::Vector{Float64})\n N = length(p)\n model = JuMP.Model(Ipopt.Optimizer)\n JuMP.set_silent(model)\n # For this problem, the solve is more accurate if we turn off problem\n # scaling.\n JuMP.set_optimizer_attribute(model, \"nlp_scaling_method\", \"none\")\n JuMP.@variable(model, 0 <= q[1:N] <= 1)\n JuMP.@constraint(model, sum(q) == 1)\n JuMP.@NLexpression(\n model,\n α,\n 1 / F.γ * sum(q[i] * log(q[i] / p[i]) for i in 1:N),\n )\n JuMP.@NLobjective(model, Max, sum(q[i] * Z[i] for i in 1:N) - α)\n JuMP.optimize!(model)\n return JuMP.value.(q), JuMP.value(α)\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We can check we get the same result as the primal version:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"for γ in [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]\n primal = primal_risk(Entropic(γ), Z, p)\n dual = dual_risk(Entropic(γ), Z, p)\n success = primal ≈ dual ? 
\"✓\" : \"×\"\n println(\"$(success) γ = $(γ), primal = $(primal), dual = $(dual)\")\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"info: Info\nThis method of solving the dual problem \"on-the-side\" is used by SDDP.jl for a number of risk measures, including a distributionally robust risk measure with the Wasserstein distance. Check out all the risk measures that SDDP.jl supports in Add a risk measure.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"The \"on-the-side\" method is very general, and it lets us incorporate any convex risk measure into SDDP. However, this comes at an increased computational cost and potential numerical issues (e.g., not converging to the exact solution).","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"However, for the entropic risk measure, Dowson, Morton, and Pagnoncelli (2020) derive the following closed form solution for q^*:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"q_omega^* = fracp_omega e^gamma z_omegasumlimits_varphi in Omega p_varphi e^gamma z_varphi","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"This is faster because we don't need to use Ipopt, and it avoids some of the numerical issues associated with solving a nonlinear program.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function dual_risk_inner(F::Entropic, Z::Vector{Float64}, p::Vector{Float64})\n q, α = zeros(length(p)), big(0.0)\n peγz = p .* exp.(F.γ .* big.(Z))\n sum_peγz = sum(peγz)\n for i in 1:length(q)\n big_q = peγz[i] / sum_peγz\n α += big_q * log(big_q / p[i])\n q[i] = Float64(big_q)\n end\n return q, Float64(α / F.γ)\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"warning: Warning\nAgain, note that we use big to avoid introducing overflow errors, before explicitly casting back to Float64 for the values we return.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We can check we get the same result as the primal version:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"for γ in [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]\n primal = primal_risk(Entropic(γ), Z, p)\n dual = dual_risk(Entropic(γ), Z, p)\n success = primal ≈ dual ? 
\"✓\" : \"×\"\n println(\"$(success) γ = $(γ), primal = $(primal), dual = $(dual)\")\nend","category":"page"},{"location":"explanation/risk/#Risk-averse-subgradients","page":"Risk aversion","title":"Risk-averse subgradients","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We ended the section on primal risk measures by explaining how we couldn't use the primal risk measure in the cut calculation because we needed some way of computing a risk-averse subgradient:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"theta ge mathbbF_j in i^+ varphi in Omega_jleftV_j^k(x^prime_k varphi)right + fracddx^primemathbbF_j in i^+ varphi in Omega_jleftV_j^k(x^prime_k varphi)right^top (x^prime - x^prime_k)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"The reason we use the dual representation is because of the following theorem, which explains how to compute a risk-averse gradient.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"info: The risk-averse subgradient theorem\nLet omega in Omega index a random vector with finite support and with nominal probability mass function, p in mathcalP, which satisfies p 0.Consider a convex risk measure, mathbbF, with a convex risk set, mathcalM(p), so that mathbbF can be expressed as the dual form.Let V(xomega) be convex with respect to x for all fixed omegainOmega, and let lambda(tildex omega) be a subgradient of V(xomega) with respect to x at x = tildex for each omega in Omega.Then, sum_omegainOmegaq^*_omega lambda(tildexomega) is a subgradient of mathbbFV(xomega) at tildex, whereq^* in argmax_q in mathcalM(p)leftmathbbE_qV(tildexomega) - alpha(p q)right","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"This theorem can be a little hard to unpack, so let's see an example:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function dual_risk_averse_subgradient(\n V::Function,\n # Use automatic differentiation to compute the gradient of V w.r.t. x,\n # given a fixed ω.\n λ::Function = (x, ω) -> ForwardDiff.gradient(x -> V(x, ω), x);\n F::AbstractRiskMeasure,\n Ω::Vector,\n p::Vector{Float64},\n x̃::Vector{Float64},\n)\n # Evaluate the function at x=x̃ for all ω ∈ Ω.\n V_ω = [V(x̃, ω) for ω in Ω]\n # Solve the dual problem to obtain an optimal q^*.\n q, α = dual_risk_inner(F, V_ω, p)\n # Compute the risk-averse subgradient by taking the expectation of the\n # subgradients w.r.t. 
q^*.\n dVdx = sum(q[i] * λ(x̃, ω) for (i, ω) in enumerate(Ω))\n return dVdx\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We can compare the subgradient obtained with the dual form against the automatic differentiation of the primal_risk function.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function primal_risk_averse_subgradient(\n V::Function;\n F::AbstractRiskMeasure,\n Ω::Vector,\n p::Vector{Float64},\n x̃::Vector{Float64},\n)\n inner(x) = primal_risk(F, [V(x, ω) for ω in Ω], p)\n return ForwardDiff.gradient(inner, x̃)\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"As our example function, we use:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"V(x, ω) = ω * x[1]^2","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"with:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Ω = [1.0, 2.0, 3.0]","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"and:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"p = [0.3, 0.4, 0.3]","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"at the point:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"x̃ = [3.0]","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"If mathbbF is the expectation risk-measure, then:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"mathbbFV(x omega) = 2 x^2","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"The function evaluation x=3 is 18 and the subgradient is 12. Let's check we get it right with the dual form:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"dual_risk_averse_subgradient(V; F = Expectation(), Ω = Ω, p = p, x̃ = x̃)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"and the primal form:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"primal_risk_averse_subgradient(V; F = Expectation(), Ω = Ω, p = p, x̃ = x̃)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"If mathbbF is the worst-case risk measure, then:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"mathbbFV(x omega) = 3 x^2","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"The function evaluation at x=3 is 27, and the subgradient is 18. 
Let's check we get it right with the dual form:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"dual_risk_averse_subgradient(V; F = WorstCase(), Ω = Ω, p = p, x̃ = x̃)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"and the primal form:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"primal_risk_averse_subgradient(V; F = WorstCase(), Ω = Ω, p = p, x̃ = x̃)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"If mathbbF is the entropic risk measure, the math is a little more difficult to derive analytically. However, we can check against our primal version:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"for γ in [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]\n dual =\n dual_risk_averse_subgradient(V; F = Entropic(γ), Ω = Ω, p = p, x̃ = x̃)\n primal = primal_risk_averse_subgradient(\n V;\n F = Entropic(γ),\n Ω = Ω,\n p = p,\n x̃ = x̃,\n )\n success = primal ≈ dual ? \"✓\" : \"×\"\n println(\"$(success) γ = $(γ), primal = $(primal), dual = $(dual)\")\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Uh oh! What happened with the last line? It looks our primal_risk_averse_subgradient encountered an error and returned a subgradient of NaN. This is because of the overflow issue with exp(x). However, we can be confident that our dual method of computing the risk-averse subgradient is both correct and more numerically robust than the primal version.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"info: Info\nAs another sanity check, notice how as gamma rightarrow 0, we tend toward the solution of the expectation risk-measure [12], and as gamma rightarrow infty, we tend toward the solution of the worse-case risk measure [18].","category":"page"},{"location":"explanation/risk/#Risk-averse-decision-rules:-Part-II","page":"Risk aversion","title":"Risk-averse decision rules: Part II","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Why is the risk-averse subgradient theorem helpful? 
Using the dual representation of a convex risk measure, we can re-write the cut:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"theta ge mathbbF_j in i^+ varphi in Omega_jleftV_j^k(x^prime_k varphi)right + fracddx^primemathbbF_j in i^+ varphi in Omega_jleftV_j^k(x^prime_k varphi)right^top (x^prime - x^prime_k)quad k=1ldotsK","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"as:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"theta ge mathbbE_q_kleftV_j^k(x^prime_k varphi) + fracddx^primeV_j^k(x^prime_k varphi)^top (x^prime - x^prime_k)right - alpha(p q_k)quad k=1ldotsK","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"where q_k = mathrmargsuplimits_q inmathcalM(p) mathbbE_qV_j^k(x_k^prime varphi) - alpha(p q).","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Therefore, we can formulate a risk-averse decision rule as:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"beginaligned\nV_i^K(x omega) = minlimits_barx x^prime u C_i(barx u omega) + theta\n x^prime = T_i(barx u omega) \n u in U_i(barx omega) \n barx = x \n theta ge mathbbE_q_kleftV_j^k(x^prime_k varphi) + fracddx^primeV_j^k(x^prime_k varphi)^top (x^prime - x^prime_k)right - alpha(p q_k)quad k=1ldotsK \n theta ge M\nendaligned","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"where q_k = mathrmargsuplimits_q inmathcalM(p) mathbbE_qV_j^k(x_k^prime varphi) - alpha(p q).","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Thus, to implement risk-averse SDDP, all we need to do is modify the backward pass to include this calculation of q_k, form the cut using q_k instead of p, and subtract the penalty term alpha(p q_k).","category":"page"},{"location":"explanation/risk/#Implementation","page":"Risk aversion","title":"Implementation","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Now we're ready to implement our risk-averse version of SDDP.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"As a prerequisite, we need most of the code from Introductory theory.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"

\nClick to view code from the tutorial \"Introductory theory\".","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"struct State\n in::JuMP.VariableRef\n out::JuMP.VariableRef\nend\n\nstruct Uncertainty\n parameterize::Function\n Ω::Vector{Any}\n P::Vector{Float64}\nend\n\nstruct Node\n subproblem::JuMP.Model\n states::Dict{Symbol,State}\n uncertainty::Uncertainty\n cost_to_go::JuMP.VariableRef\nend\n\nstruct PolicyGraph\n nodes::Vector{Node}\n arcs::Vector{Dict{Int,Float64}}\nend\n\nfunction Base.show(io::IO, model::PolicyGraph)\n println(io, \"A policy graph with $(length(model.nodes)) nodes\")\n println(io, \"Arcs:\")\n for (from, arcs) in enumerate(model.arcs)\n for (to, probability) in arcs\n println(io, \" $(from) => $(to) w.p. $(probability)\")\n end\n end\n return\nend\n\nfunction PolicyGraph(\n subproblem_builder::Function;\n graph::Vector{Dict{Int,Float64}},\n lower_bound::Float64,\n optimizer,\n)\n nodes = Node[]\n for t in 1:length(graph)\n model = JuMP.Model(optimizer)\n states, uncertainty = subproblem_builder(model, t)\n JuMP.@variable(model, cost_to_go >= lower_bound)\n obj = JuMP.objective_function(model)\n JuMP.@objective(model, Min, obj + cost_to_go)\n if length(graph[t]) == 0\n JuMP.fix(cost_to_go, 0.0; force = true)\n end\n push!(nodes, Node(model, states, uncertainty, cost_to_go))\n end\n return PolicyGraph(nodes, graph)\nend\n\nfunction sample_uncertainty(uncertainty::Uncertainty)\n r = rand()\n for (p, ω) in zip(uncertainty.P, uncertainty.Ω)\n r -= p\n if r < 0.0\n return ω\n end\n end\n return error(\"We should never get here because P should sum to 1.0.\")\nend\n\nfunction sample_next_node(model::PolicyGraph, current::Int)\n if length(model.arcs[current]) == 0\n return nothing\n else\n r = rand()\n for (to, probability) in model.arcs[current]\n r -= probability\n if r < 0.0\n return to\n end\n end\n return nothing\n end\nend\n\nfunction forward_pass(model::PolicyGraph, io::IO = stdout)\n incoming_state =\n Dict(k => JuMP.fix_value(v.in) for (k, v) in model.nodes[1].states)\n simulation_cost = 0.0\n trajectory = Tuple{Int,Dict{Symbol,Float64}}[]\n t = 1\n while t !== nothing\n node = model.nodes[t]\n ω = sample_uncertainty(node.uncertainty)\n node.uncertainty.parameterize(ω)\n for (k, v) in incoming_state\n JuMP.fix(node.states[k].in, v; force = true)\n end\n JuMP.optimize!(node.subproblem)\n if JuMP.termination_status(node.subproblem) != JuMP.MOI.OPTIMAL\n error(\"Something went terribly wrong!\")\n end\n outgoing_state = Dict(k => JuMP.value(v.out) for (k, v) in node.states)\n stage_cost =\n JuMP.objective_value(node.subproblem) - JuMP.value(node.cost_to_go)\n simulation_cost += stage_cost\n incoming_state = outgoing_state\n push!(trajectory, (t, outgoing_state))\n t = sample_next_node(model, t)\n end\n return trajectory, simulation_cost\nend\n\nfunction upper_bound(model::PolicyGraph; replications::Int)\n simulations = [forward_pass(model, devnull) for i in 1:replications]\n z = [s[2] for s in simulations]\n μ = Statistics.mean(z)\n tσ = 1.96 * Statistics.std(z) / sqrt(replications)\n return μ, tσ\nend\n\nfunction lower_bound(model::PolicyGraph)\n node = model.nodes[1]\n bound = 0.0\n for (p, ω) in zip(node.uncertainty.P, node.uncertainty.Ω)\n node.uncertainty.parameterize(ω)\n JuMP.optimize!(node.subproblem)\n bound += p * JuMP.objective_value(node.subproblem)\n end\n return bound\nend\n\nfunction evaluate_policy(\n model::PolicyGraph;\n node::Int,\n incoming_state::Dict{Symbol,Float64},\n 
random_variable,\n)\n the_node = model.nodes[node]\n the_node.uncertainty.parameterize(random_variable)\n for (k, v) in incoming_state\n JuMP.fix(the_node.states[k].in, v; force = true)\n end\n JuMP.optimize!(the_node.subproblem)\n return Dict(\n k => JuMP.value.(v) for\n (k, v) in JuMP.object_dictionary(the_node.subproblem)\n )\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"

","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"First, we need to modify the backward pass to compute the cuts using the risk-averse subgradient theorem:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function backward_pass(\n model::PolicyGraph,\n trajectory::Vector{Tuple{Int,Dict{Symbol,Float64}}},\n io::IO = stdout;\n risk_measure::AbstractRiskMeasure,\n)\n println(io, \"| Backward pass\")\n for i in reverse(1:length(trajectory))\n index, outgoing_states = trajectory[i]\n node = model.nodes[index]\n println(io, \"| | Visiting node $(index)\")\n if length(model.arcs[index]) == 0\n continue\n end\n # =====================================================================\n # New! Create vectors to store the cut expressions, V(x,ω) and p:\n cut_expressions, V_ω, p = JuMP.AffExpr[], Float64[], Float64[]\n # =====================================================================\n for (j, P_ij) in model.arcs[index]\n next_node = model.nodes[j]\n for (k, v) in outgoing_states\n JuMP.fix(next_node.states[k].in, v; force = true)\n end\n for (pφ, φ) in zip(next_node.uncertainty.P, next_node.uncertainty.Ω)\n next_node.uncertainty.parameterize(φ)\n JuMP.optimize!(next_node.subproblem)\n V = JuMP.objective_value(next_node.subproblem)\n dVdx = Dict(\n k => JuMP.reduced_cost(v.in) for (k, v) in next_node.states\n )\n # =============================================================\n # New! Construct and append the expression\n # `V_j^K(x_k, φ) + dVdx_j^K(x'_k, φ)ᵀ(x - x_k)` to the list of\n # cut expressions.\n push!(\n cut_expressions,\n JuMP.@expression(\n node.subproblem,\n V + sum(\n dVdx[k] * (x.out - outgoing_states[k]) for\n (k, x) in node.states\n ),\n )\n )\n # Add the objective value to Z:\n push!(V_ω, V)\n # Add the probability to p:\n push!(p, P_ij * pφ)\n # =============================================================\n end\n end\n # =====================================================================\n # New! Using the solutions in V_ω, compute q and α:\n q, α = dual_risk_inner(risk_measure, V_ω, p)\n println(io, \"| | | Z = \", Z)\n println(io, \"| | | p = \", p)\n println(io, \"| | | q = \", q)\n println(io, \"| | | α = \", α)\n # Then add the cut:\n c = JuMP.@constraint(\n node.subproblem,\n node.cost_to_go >=\n sum(q[i] * cut_expressions[i] for i in 1:length(q)) - α\n )\n # =====================================================================\n println(io, \"| | | Adding cut : \", c)\n end\n return nothing\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We also need to update the train loop of SDDP to pass a risk measure to the backward pass:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"function train(\n model::PolicyGraph;\n iteration_limit::Int,\n replications::Int,\n # =========================================================================\n # New! Add a risk_measure argument\n risk_measure::AbstractRiskMeasure,\n # =========================================================================\n io::IO = stdout,\n)\n for i in 1:iteration_limit\n println(io, \"Starting iteration $(i)\")\n outgoing_states, _ = forward_pass(model, io)\n # =====================================================================\n # New! 
Pass the risk measure to the backward pass.\n backward_pass(model, outgoing_states, io; risk_measure = risk_measure)\n # =====================================================================\n println(io, \"| Finished iteration\")\n println(io, \"| | lower_bound = \", lower_bound(model))\n end\n μ, tσ = upper_bound(model; replications = replications)\n println(io, \"Upper bound = $(μ) ± $(tσ)\")\n return\nend","category":"page"},{"location":"explanation/risk/#Risk-averse-bounds","page":"Risk aversion","title":"Risk-averse bounds","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"warning: Warning\nThis section is important.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"When we had a risk-neutral policy (i.e., we only used the expectation risk measure), we discussed how we could form valid lower and upper bounds.","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"The upper bound is still valid as a Monte Carlo simulation of the expected cost of the policy. (Although this upper bound doesn't capture the change in the policy we wanted to achieve, namely that the impact of the worst outcomes were reduced.)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"However, if we use a different risk measure, the lower bound is no longer valid!","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"We can still calculate a \"lower bound\" as the objective of the first-stage approximated subproblem, and this will converge to a finite value. However, we can't meaningfully interpret it as a bound with respect to the optimal policy. Therefore, it's best to just ignore the lower bound when training a risk-averse policy.","category":"page"},{"location":"explanation/risk/#Example:-risk-averse-hydro-thermal-scheduling","page":"Risk aversion","title":"Example: risk-averse hydro-thermal scheduling","text":"","category":"section"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Now it's time for an example. 
We create the same problem as Introductory theory:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"model = PolicyGraph(;\n graph = [Dict(2 => 1.0), Dict(3 => 1.0), Dict{Int,Float64}()],\n lower_bound = 0.0,\n optimizer = HiGHS.Optimizer,\n) do subproblem, t\n JuMP.set_silent(subproblem)\n JuMP.@variable(subproblem, volume_in == 200)\n JuMP.@variable(subproblem, 0 <= volume_out <= 200)\n states = Dict(:volume => State(volume_in, volume_out))\n JuMP.@variables(subproblem, begin\n thermal_generation >= 0\n hydro_generation >= 0\n hydro_spill >= 0\n inflow\n end)\n JuMP.@constraints(\n subproblem,\n begin\n volume_out == volume_in + inflow - hydro_generation - hydro_spill\n demand_constraint, thermal_generation + hydro_generation == 150.0\n end\n )\n fuel_cost = [50.0, 100.0, 150.0]\n JuMP.@objective(subproblem, Min, fuel_cost[t] * thermal_generation)\n uncertainty =\n Uncertainty([0.0, 50.0, 100.0], [1 / 3, 1 / 3, 1 / 3]) do ω\n return JuMP.fix(inflow, ω)\n end\n return states, uncertainty\nend","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Then we train a risk-averse policy, passing a risk measure to train:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"train(\n model;\n iteration_limit = 3,\n replications = 100,\n risk_measure = Entropic(1.0),\n)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"Finally, evaluate the decision rule:","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"evaluate_policy(\n model;\n node = 1,\n incoming_state = Dict(:volume => 150.0),\n random_variable = 75,\n)","category":"page"},{"location":"explanation/risk/","page":"Risk aversion","title":"Risk aversion","text":"info: Info\nFor this trivial example, the risk-averse policy isn't very different from the policy obtained using the expectation risk-measure. 
If you try it on some bigger/more interesting problems, you should see the expected cost increase, and the upper tail of the policy decrease.","category":"page"}] } diff --git a/previews/PR797/tutorial/SDDP.log b/previews/PR797/tutorial/SDDP.log index 9260a3fb6..375f03cc7 100644 --- a/previews/PR797/tutorial/SDDP.log +++ b/previews/PR797/tutorial/SDDP.log @@ -23,24 +23,24 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 -4.308942e+01 5.911572e+01 1.270255e+00 162 1 - 62 9.058538e+00 7.899015e+00 2.272231e+00 10044 1 - 110 8.905371e+00 7.895026e+00 3.276748e+00 17820 1 - 155 1.013127e+01 7.894128e+00 4.277412e+00 25110 1 - 196 8.491994e+00 7.892279e+00 5.283188e+00 31752 1 - 232 8.834522e+00 7.891669e+00 6.305662e+00 37584 1 - 268 9.792430e+00 7.888880e+00 7.322312e+00 43416 1 - 302 9.310072e+00 7.888246e+00 8.330216e+00 48924 1 - 334 9.923628e+00 7.888055e+00 9.351924e+00 54108 1 - 477 9.766161e+00 7.887904e+00 1.435876e+01 77274 1 - 604 8.483836e+00 7.887751e+00 1.938403e+01 97848 1 - 618 8.351073e+00 7.887745e+00 2.000197e+01 100116 1 + 1 -4.199992e+01 5.821554e+01 1.246103e+00 162 1 + 61 9.458406e+00 7.916707e+00 2.266685e+00 9882 1 + 108 9.785254e+00 7.910481e+00 3.277447e+00 17496 1 + 153 8.746097e+00 7.904751e+00 4.288313e+00 24786 1 + 195 9.180959e+00 7.904578e+00 5.297473e+00 31590 1 + 232 1.010440e+01 7.904209e+00 6.301664e+00 37584 1 + 267 7.456076e+00 7.903732e+00 7.327330e+00 43254 1 + 300 1.026767e+01 7.903401e+00 8.340721e+00 48600 1 + 333 1.213310e+01 7.903401e+00 9.345247e+00 53946 1 + 470 1.002919e+01 7.902829e+00 1.435850e+01 76140 1 + 575 7.612376e+00 7.902545e+00 1.955877e+01 93150 1 + 586 9.359577e+00 7.902545e+00 2.004409e+01 94932 1 ------------------------------------------------------------------- status : time_limit -total time (s) : 2.000197e+01 -total solves : 100116 -best bound : 7.887745e+00 -simulation ci : 8.871303e+00 ± 3.386698e-01 +total time (s) : 2.004409e+01 +total solves : 94932 +best bound : 7.902545e+00 +simulation ci : 8.754209e+00 ± 3.802230e-01 numeric issues : 0 ------------------------------------------------------------------- @@ -70,52 +70,52 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 0.000000e+00 7.348402e+02 6.433010e-03 103 1 - 2 5.488904e+02 5.957874e+02 2.317810e-02 406 1 - 3 5.957874e+02 5.619068e+02 2.799201e-02 509 1 - 4 5.059133e+02 5.562974e+02 3.268600e-02 612 1 - 5 6.162807e+02 5.540260e+02 3.730011e-02 715 1 - 6 6.049801e+02 5.537426e+02 4.193401e-02 818 1 - 7 6.089710e+02 5.536084e+02 4.673910e-02 921 1 - 8 6.073345e+02 5.536064e+02 5.142188e-02 1024 1 - 9 4.855049e+02 5.536055e+02 5.611801e-02 1127 1 - 10 4.679362e+02 5.536054e+02 6.076694e-02 1230 1 - 11 6.083526e+02 5.536054e+02 6.536603e-02 1333 1 - 12 6.083526e+02 5.536054e+02 7.023501e-02 1436 1 - 13 4.691372e+02 5.536054e+02 7.509899e-02 1539 1 - 14 5.653721e+02 5.536054e+02 7.994604e-02 1642 1 - 15 5.309983e+02 5.536054e+02 8.481002e-02 1745 1 - 16 4.580340e+02 5.536054e+02 8.964396e-02 1848 1 - 17 6.083526e+02 5.536054e+02 9.456801e-02 1951 1 - 18 5.932707e+02 5.536054e+02 9.952307e-02 2054 1 - 19 6.083526e+02 5.536054e+02 1.043780e-01 2157 1 - 20 4.604990e+02 5.536054e+02 1.092670e-01 2260 1 - 21 4.679086e+02 5.536054e+02 
1.303971e-01 2563 1 - 22 5.653721e+02 5.536054e+02 1.353321e-01 2666 1 - 23 4.036846e+02 5.536054e+02 1.402020e-01 2769 1 - 24 5.971798e+02 5.536054e+02 1.451249e-01 2872 1 - 25 6.083526e+02 5.536054e+02 1.500471e-01 2975 1 - 26 6.083526e+02 5.536054e+02 1.550000e-01 3078 1 - 27 6.083526e+02 5.536054e+02 1.599169e-01 3181 1 - 28 5.215448e+02 5.536054e+02 1.648250e-01 3284 1 - 29 6.042291e+02 5.536054e+02 1.697831e-01 3387 1 - 30 4.627215e+02 5.536054e+02 1.747749e-01 3490 1 - 31 6.083526e+02 5.536054e+02 1.797011e-01 3593 1 - 32 6.083526e+02 5.536054e+02 1.846240e-01 3696 1 - 33 6.083526e+02 5.536054e+02 1.895890e-01 3799 1 - 34 5.722953e+02 5.536054e+02 1.945641e-01 3902 1 - 35 4.857658e+02 5.536054e+02 1.995211e-01 4005 1 - 36 6.083526e+02 5.536054e+02 2.044399e-01 4108 1 - 37 6.083526e+02 5.536054e+02 2.093799e-01 4211 1 - 38 4.537736e+02 5.536054e+02 2.142861e-01 4314 1 - 39 5.990232e+02 5.536054e+02 2.191470e-01 4417 1 - 40 6.083526e+02 5.536054e+02 2.240000e-01 4520 1 + 1 0.000000e+00 7.381026e+02 6.302834e-03 103 1 + 2 4.636886e+02 5.979152e+02 2.295399e-02 406 1 + 3 5.717418e+02 5.585856e+02 2.751398e-02 509 1 + 4 6.541002e+02 5.566193e+02 3.217483e-02 612 1 + 5 6.301200e+02 5.554489e+02 3.699088e-02 715 1 + 6 4.696176e+02 5.546821e+02 4.157782e-02 818 1 + 7 5.037949e+02 5.546752e+02 4.625177e-02 921 1 + 8 4.220417e+02 5.546684e+02 5.082083e-02 1024 1 + 9 5.869911e+02 5.546684e+02 5.536699e-02 1127 1 + 10 6.110812e+02 5.546684e+02 5.999398e-02 1230 1 + 11 5.069773e+02 5.546684e+02 6.511593e-02 1333 1 + 12 6.110812e+02 5.546684e+02 7.001781e-02 1436 1 + 13 6.110812e+02 5.546684e+02 7.483697e-02 1539 1 + 14 4.273511e+02 5.546684e+02 7.967782e-02 1642 1 + 15 5.555456e+02 5.546684e+02 8.452797e-02 1745 1 + 16 6.110812e+02 5.546684e+02 8.954000e-02 1848 1 + 17 6.037710e+02 5.546684e+02 9.435081e-02 1951 1 + 18 5.005857e+02 5.546684e+02 9.913182e-02 2054 1 + 19 4.872447e+02 5.546684e+02 1.039088e-01 2157 1 + 20 6.110812e+02 5.546684e+02 1.087809e-01 2260 1 + 21 6.110812e+02 5.546684e+02 1.276228e-01 2563 1 + 22 4.828795e+02 5.546684e+02 1.324539e-01 2666 1 + 23 6.110812e+02 5.546684e+02 1.373909e-01 2769 1 + 24 6.110812e+02 5.546684e+02 1.421468e-01 2872 1 + 25 5.271693e+02 5.546684e+02 1.468759e-01 2975 1 + 26 6.110812e+02 5.546684e+02 1.516318e-01 3078 1 + 27 3.831967e+02 5.546684e+02 2.804129e-01 3181 1 + 28 6.110812e+02 5.546684e+02 2.856400e-01 3284 1 + 29 4.179279e+02 5.546684e+02 2.908139e-01 3387 1 + 30 5.388067e+02 5.546684e+02 2.960649e-01 3490 1 + 31 3.831967e+02 5.546684e+02 3.013270e-01 3593 1 + 32 4.334630e+02 5.546684e+02 3.064640e-01 3696 1 + 33 4.958795e+02 5.546684e+02 3.117158e-01 3799 1 + 34 4.945721e+02 5.546684e+02 3.169348e-01 3902 1 + 35 4.501120e+02 5.546684e+02 3.220439e-01 4005 1 + 36 3.831967e+02 5.546684e+02 3.272078e-01 4108 1 + 37 5.869911e+02 5.546684e+02 3.323789e-01 4211 1 + 38 5.446092e+02 5.546684e+02 3.375349e-01 4314 1 + 39 4.893662e+02 5.546684e+02 3.426979e-01 4417 1 + 40 6.110812e+02 5.546684e+02 3.478260e-01 4520 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 2.240000e-01 +total time (s) : 3.478260e-01 total solves : 4520 -best bound : 5.536054e+02 -simulation ci : 5.440248e+02 ± 3.359064e+01 +best bound : 5.546684e+02 +simulation ci : 5.179208e+02 ± 3.581163e+01 numeric issues : 0 ------------------------------------------------------------------- @@ -145,11 +145,11 @@ numerical stability report ------------------------------------------------------------------- iteration 
simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.079600e+03 3.157700e+02 4.319191e-02 104 1 - 10 6.829100e+02 6.829100e+02 1.408200e-01 1040 1 + 1 1.079600e+03 3.157700e+02 4.396701e-02 104 1 + 10 6.829100e+02 6.829100e+02 1.422491e-01 1040 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 1.408200e-01 +total time (s) : 1.422491e-01 total solves : 1040 best bound : 6.829100e+02 simulation ci : 7.289889e+02 ± 7.726064e+01 @@ -181,16 +181,16 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 0.000000e+00 0.000000e+00 4.332209e-02 208 1 - 43 3.147876e+02 2.482960e+02 1.051473e+00 8944 1 - 81 2.500700e+02 2.633541e+02 2.073444e+00 16848 1 - 100 7.140000e+01 2.678968e+02 2.622479e+00 20800 1 + 1 0.000000e+00 0.000000e+00 4.392004e-02 208 1 + 47 3.068656e+02 2.506129e+02 1.065994e+00 9776 1 + 82 2.220767e+02 2.649358e+02 2.067326e+00 17056 1 + 100 4.106203e+02 2.693178e+02 2.582217e+00 20800 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 2.622479e+00 +total time (s) : 2.582217e+00 total solves : 20800 -best bound : 2.678968e+02 -simulation ci : 2.990844e+02 ± 4.412856e+01 +best bound : 2.693178e+02 +simulation ci : 2.763455e+02 ± 3.951780e+01 numeric issues : 0 ------------------------------------------------------------------- @@ -219,35 +219,36 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 2.077700e+04 1.672058e+04 1.148610e-01 1043 1 - 4 2.371448e+05 8.790687e+04 1.158474e+00 12492 1 - 9 3.554457e+05 9.269712e+04 3.090096e+00 32267 1 - 14 2.087393e+05 9.317022e+04 5.664502e+00 48506 1 - 15 4.645476e+05 9.323300e+04 8.595140e+00 68477 1 - 23 9.164029e+04 9.336212e+04 1.415758e+01 101781 1 - 31 8.922762e+04 9.336533e+04 1.948426e+01 129469 1 - 34 1.646510e+05 9.337466e+04 2.591307e+01 158806 1 - 42 7.816923e+04 9.337715e+04 3.137683e+01 181710 1 - 45 1.521806e+05 9.337999e+04 3.712482e+01 203767 1 - 48 3.739193e+05 9.338241e+04 4.592514e+01 234560 1 - 52 8.064625e+04 9.338499e+04 5.196300e+01 254540 1 - 56 2.230293e+05 9.338634e+04 5.722569e+01 270776 1 - 60 3.795747e+05 9.338706e+04 6.516237e+01 293668 1 - 63 3.962255e+05 9.338735e+04 7.354839e+01 317389 1 - 67 2.279541e+05 9.338929e+04 7.977527e+01 334665 1 - 75 8.334185e+04 9.339061e+04 8.558623e+01 349873 1 - 80 7.289020e+04 9.339111e+04 9.108056e+01 363616 1 - 86 1.592304e+05 9.339144e+04 9.888962e+01 382146 1 - 93 9.195856e+04 9.339201e+04 1.049164e+02 396103 1 - 95 1.690896e+05 9.339230e+04 1.100594e+02 407757 1 - 96 3.122782e+05 9.339230e+04 1.160097e+02 421072 1 - 100 4.335252e+03 9.339255e+04 1.225263e+02 435020 1 + 1 8.896140e+04 5.355877e+04 3.290091e-01 3747 1 + 4 5.114702e+04 8.712480e+04 1.375316e+00 14988 1 + 8 1.690164e+05 9.103990e+04 2.807647e+00 28728 1 + 12 1.891036e+05 9.258501e+04 4.405667e+00 42468 1 + 14 1.793613e+05 9.297886e+04 5.861146e+00 53290 1 + 15 2.932641e+05 9.328229e+04 7.627236e+00 65981 1 + 17 2.262467e+05 9.333438e+04 9.302794e+00 76595 1 + 24 8.157553e+04 9.335834e+04 1.438460e+01 105736 1 + 32 4.245414e+04 9.336878e+04 1.961926e+01 131968 1 + 41 3.636223e+05 9.337191e+04 2.493370e+01 
155499 1 + 47 6.304775e+04 9.337416e+04 3.043700e+01 177565 1 + 49 3.958071e+05 9.337891e+04 3.554630e+01 196291 1 + 53 2.541544e+05 9.337995e+04 4.204532e+01 219183 1 + 60 1.272915e+05 9.338230e+04 4.841793e+01 240420 1 + 65 3.349013e+05 9.338373e+04 5.545933e+01 262067 1 + 73 1.032922e+05 9.338608e+04 6.069484e+01 277483 1 + 75 4.508955e+05 9.338630e+04 6.954851e+01 302033 1 + 78 1.306855e+05 9.338652e+04 7.492027e+01 316394 1 + 83 1.617044e+05 9.338707e+04 8.168241e+01 333881 1 + 89 3.304372e+05 9.338844e+04 9.095024e+01 356571 1 + 92 2.115612e+05 9.338923e+04 9.950628e+01 376548 1 + 96 1.860008e+05 9.339074e+04 1.081299e+02 394448 1 + 99 3.774533e+04 9.339173e+04 1.133516e+02 405897 1 + 100 2.990300e+04 9.339187e+04 1.140218e+02 407356 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 1.225263e+02 -total solves : 435020 -best bound : 9.339255e+04 -simulation ci : 9.780506e+04 ± 2.038111e+04 +total time (s) : 1.140218e+02 +total solves : 407356 +best bound : 9.339187e+04 +simulation ci : 9.160261e+04 ± 1.843432e+04 numeric issues : 0 ------------------------------------------------------------------- @@ -276,14 +277,107 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.250000e+04 2.500000e+03 3.855944e-03 12 1 - 10 7.500000e+03 8.333333e+03 8.556104e-02 120 1 + 1 3.750000e+04 2.500000e+03 3.497124e-03 12 1 + 10 1.250000e+04 8.333333e+03 1.358390e-02 120 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 8.556104e-02 +total time (s) : 1.358390e-02 total solves : 120 best bound : 8.333333e+03 -simulation ci : 9.125000e+03 ± 2.478419e+03 +simulation ci : 1.187500e+04 ± 6.125000e+03 +numeric issues : 0 +------------------------------------------------------------------- + +------------------------------------------------------------------- + SDDP.jl (c) Oscar Dowson and contributors, 2017-24 +------------------------------------------------------------------- +problem + nodes : 11 + state variables : 3 + scenarios : 1.02400e+13 + existing cuts : false +options + solver : serial mode + risk measure : SDDP.Expectation() + sampling scheme : SDDP.InSampleMonteCarlo +subproblem structure + VariableRef : [9, 9] + AffExpr in MOI.EqualTo{Float64} : [2, 2] + VariableRef in MOI.EqualTo{Float64} : [1, 2] + VariableRef in MOI.GreaterThan{Float64} : [4, 5] + VariableRef in MOI.LessThan{Float64} : [1, 1] +numerical stability report + matrix range [1e+00, 1e+00] + objective range [1e+00, 4e+01] + bounds range [0e+00, 0e+00] + rhs range [0e+00, 0e+00] +------------------------------------------------------------------- + iteration simulation bound time (s) solves pid +------------------------------------------------------------------- + 1 3.886158e+05 4.573582e+04 1.881695e-02 212 1 + 55 1.440289e+05 1.443366e+05 1.024649e+00 14960 1 + 110 1.435658e+05 1.443373e+05 2.026297e+00 28820 1 + 166 1.592711e+05 1.443373e+05 3.031513e+00 40692 1 + 219 1.226816e+05 1.443373e+05 4.047272e+00 53028 1 + 268 1.446184e+05 1.443373e+05 5.052959e+00 63416 1 + 286 1.260500e+05 1.443373e+05 5.428404e+00 67232 1 +------------------------------------------------------------------- +status : simulation_stopping +total time (s) : 5.428404e+00 +total solves : 67232 +best bound : 1.443373e+05 +simulation ci : 1.446033e+05 ± 3.621723e+03 
+numeric issues : 0 +------------------------------------------------------------------- + +------------------------------------------------------------------- + SDDP.jl (c) Oscar Dowson and contributors, 2017-24 +------------------------------------------------------------------- +problem + nodes : 2 + state variables : 3 + scenarios : Inf + existing cuts : false +options + solver : serial mode + risk measure : SDDP.Expectation() + sampling scheme : SDDP.InSampleMonteCarlo +subproblem structure + VariableRef : [9, 9] + AffExpr in MOI.EqualTo{Float64} : [2, 2] + VariableRef in MOI.EqualTo{Float64} : [1, 2] + VariableRef in MOI.GreaterThan{Float64} : [4, 5] +numerical stability report + matrix range [1e+00, 1e+00] + objective range [1e+00, 4e+01] + bounds range [0e+00, 0e+00] + rhs range [0e+00, 0e+00] +------------------------------------------------------------------- + iteration simulation bound time (s) solves pid +------------------------------------------------------------------- + 1 1.976053e+04 3.345593e+04 6.960154e-03 85 1 + 27 4.079662e+05 2.999320e+05 1.010339e+00 13110 1 + 62 3.998361e+05 3.124508e+05 2.044865e+00 25304 1 + 83 4.808036e+05 3.126376e+05 3.073261e+00 35825 1 + 105 8.732187e+05 3.126616e+05 4.128147e+00 45528 1 + 121 7.242058e+05 3.126642e+05 5.169122e+00 53671 1 + 145 2.721555e+05 3.126649e+05 6.175422e+00 61612 1 + 167 6.178394e+05 3.126650e+05 7.271336e+00 69698 1 + 178 6.524500e+05 3.126650e+05 8.415252e+00 77437 1 + 198 7.501342e+05 3.126650e+05 9.566864e+00 84702 1 + 257 7.746053e+04 3.126650e+05 1.457950e+01 109961 1 + 311 9.295026e+05 3.126650e+05 1.960357e+01 127340 1 + 334 4.816711e+05 3.126650e+05 2.482720e+01 141328 1 + 356 6.182605e+05 3.126650e+05 3.021558e+01 151472 1 + 374 7.947658e+05 3.126650e+05 3.547974e+01 159848 1 + 396 2.336711e+05 3.126650e+05 4.054506e+01 166968 1 + 400 3.821342e+05 3.126650e+05 4.230857e+01 169114 1 +------------------------------------------------------------------- +status : iteration_limit +total time (s) : 4.230857e+01 +total solves : 169114 +best bound : 3.126650e+05 +simulation ci : 3.018209e+05 ± 2.740583e+04 numeric issues : 0 ------------------------------------------------------------------- @@ -312,14 +406,14 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.875000e+04 1.991887e+03 5.798817e-03 18 1 - 40 5.000000e+03 8.072917e+03 1.367278e-01 1320 1 + 1 9.375000e+03 1.991887e+03 5.150795e-03 18 1 + 40 1.875000e+03 8.072917e+03 1.437938e-01 1320 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.367278e-01 +total time (s) : 1.437938e-01 total solves : 1320 best bound : 8.072917e+03 -simulation ci : 8.463149e+03 ± 2.413376e+03 +simulation ci : 5.917822e+03 ± 1.372472e+03 numeric issues : 0 ------------------------------------------------------------------- @@ -350,11 +444,11 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 2.499895e+01 1.562631e+00 1.620889e-02 6 1 - 40 8.333333e+00 8.333333e+00 6.741519e-01 246 1 + 1 2.499895e+01 1.562631e+00 1.634693e-02 6 1 + 40 8.333333e+00 8.333333e+00 7.224190e-01 246 1 ------------------------------------------------------------------- status : simulation_stopping -total time 
(s) : 6.741519e-01 +total time (s) : 7.224190e-01 total solves : 246 best bound : 8.333333e+00 simulation ci : 8.810723e+00 ± 8.167195e-01 @@ -387,14 +481,14 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 0.000000e+00 1.000000e+01 4.232883e-03 11 1 - 41 7.000000e+00 6.561000e+00 6.972868e-01 2875 1 + 1 0.000000e+00 7.217100e+00 5.120039e-03 13 1 + 40 2.500000e+01 6.561000e+00 8.739021e-01 3144 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 6.972868e-01 -total solves : 2875 +total time (s) : 8.739021e-01 +total solves : 3144 best bound : 6.561000e+00 -simulation ci : 6.195122e+00 ± 2.675728e+00 +simulation ci : 8.075000e+00 ± 2.944509e+00 numeric issues : 0 ------------------------------------------------------------------- @@ -419,15 +513,14 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 7.672500e+03 3.738585e+03 2.257490e-02 39 1 - 198 5.671875e+03 5.092593e+03 1.022979e+00 9522 1 - 300 2.475000e+03 5.092593e+03 1.449091e+00 13800 1 + 1 6.806250e+03 4.408308e+03 2.262402e-02 39 1 + 182 7.218750e+03 5.092593e+03 7.837250e-01 8598 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.449091e+00 -total solves : 13800 +total time (s) : 7.837250e-01 +total solves : 8598 best bound : 5.092593e+03 -simulation ci : 4.966233e+03 ± 4.164351e+02 +simulation ci : 4.992895e+03 ± 5.635857e+02 numeric issues : 0 ------------------------------------------------------------------- @@ -452,15 +545,15 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 7.437500e+03 3.434307e+03 2.600098e-02 39 1 - 202 5.937500e+03 5.135984e+03 1.028277e+00 9978 1 - 300 1.187500e+04 5.135984e+03 1.525532e+00 13800 1 + 1 7.250000e+03 3.529412e+03 2.422404e-02 39 1 + 211 5.687500e+03 5.135984e+03 1.027903e+00 10029 1 + 290 1.150000e+04 5.135984e+03 1.379138e+00 13110 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 1.525532e+00 -total solves : 13800 +total time (s) : 1.379138e+00 +total solves : 13110 best bound : 5.135984e+03 -simulation ci : 5.021167e+03 ± 4.471027e+02 +simulation ci : 5.362165e+03 ± 4.590779e+02 numeric issues : 0 ------------------------------------------------------------------- @@ -489,14 +582,14 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 2.500000e+04 3.958333e+03 3.963947e-03 12 1 - 40 1.875000e+03 1.062500e+04 7.159686e-02 642 1 + 1 3.750000e+04 3.958333e+03 3.509998e-03 12 1 + 60 1.125000e+04 1.062500e+04 1.048701e-01 963 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 7.159686e-02 -total solves : 642 +total time (s) : 1.048701e-01 +total solves : 963 best bound : 1.062500e+04 -simulation ci : 1.044969e+04 ± 2.365515e+03 +simulation ci : 1.142388e+04 ± 2.185147e+03 numeric issues 
: 0 ------------------------------------------------------------------- @@ -526,16 +619,16 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 2.369329e+06 8.292865e+04 1.862850e-01 47 1 - 3 1.048214e+06 3.234043e+05 1.517102e+00 253 1 - 8 1.747989e+04 3.756159e+05 2.520988e+00 432 1 - 10 2.028528e+05 3.857742e+05 2.873698e+00 486 1 + 1 1.458563e+07 3.264622e+04 1.079772e+00 271 1 + 3 1.883248e+06 9.059830e+04 2.227736e+00 461 1 + 8 4.978020e+05 3.616759e+05 3.418261e+00 636 1 + 10 4.551755e+06 3.694773e+05 5.206153e+00 930 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 2.873698e+00 -total solves : 486 -best bound : 3.857742e+05 -simulation ci : 5.430437e+05 ± 4.659585e+05 +total time (s) : 5.206153e+00 +total solves : 930 +best bound : 3.694773e+05 +simulation ci : 2.225401e+06 ± 2.832083e+06 numeric issues : 0 ------------------------------------------------------------------- @@ -567,17 +660,17 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 3.096303e+06 3.310888e+04 3.757112e-01 47 1 - 3 1.776906e+06 8.235383e+04 2.318987e+00 141 1 - 6 1.718189e+06 2.384704e+05 3.967327e+00 266 1 - 9 4.072341e+05 3.526150e+05 5.440380e+00 363 1 - 10 1.864000e+05 3.537789e+05 5.916173e+00 402 1 + 1 9.332938e+06 8.143094e+04 1.612005e+00 179 1 + 3 3.955533e+05 2.775267e+05 2.860391e+00 273 1 + 6 7.123937e+05 3.339911e+05 5.947709e+00 458 1 + 9 4.448369e+05 3.623498e+05 7.490460e+00 563 1 + 10 1.927026e+05 3.892585e+05 8.088215e+00 598 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 5.916173e+00 -total solves : 402 -best bound : 3.537789e+05 -simulation ci : 7.476337e+05 ± 6.640639e+05 +total time (s) : 8.088215e+00 +total solves : 598 +best bound : 3.892585e+05 +simulation ci : 1.188252e+06 ± 1.778773e+06 numeric issues : 0 ------------------------------------------------------------------- @@ -607,17 +700,17 @@ numerical stability report ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 2.863871e+06 5.013490e+04 2.625811e-01 45 1 - 4 6.673999e+05 2.980236e+05 2.195032e+00 240 1 - 7 3.701837e+05 3.704299e+05 3.682911e+00 399 1 - 9 4.233725e+05 4.060326e+05 4.949868e+00 525 1 - 10 1.905239e+05 4.136109e+05 5.250140e+00 555 1 + 1 1.616405e+06 6.633473e+04 2.201200e-01 30 1 + 3 7.336368e+05 2.131314e+05 1.479425e+00 141 1 + 7 1.613523e+06 3.688984e+05 4.756617e+00 387 1 + 8 1.001434e+07 3.810947e+05 7.486794e+00 564 1 + 10 1.367906e+06 3.877966e+05 1.048707e+01 783 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 5.250140e+00 -total solves : 555 -best bound : 4.136109e+05 -simulation ci : 8.251294e+05 ± 5.685420e+05 +total time (s) : 1.048707e+01 +total solves : 783 +best bound : 3.877966e+05 +simulation ci : 1.638935e+06 ± 1.863579e+06 numeric issues : 0 ------------------------------------------------------------------- @@ -641,14 +734,14 @@ subproblem structure ------------------------------------------------------------------- iteration simulation 
bound time (s) solves pid ------------------------------------------------------------------- - 1 2.812500e+04 1.991887e+03 1.471901e-02 18 1 - 20 1.875000e+03 8.072917e+03 4.870701e-02 360 1 + 1 2.812500e+04 1.991887e+03 1.481318e-02 18 1 + 20 1.125000e+04 8.072917e+03 5.060315e-02 360 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 4.870701e-02 +total time (s) : 5.060315e-02 total solves : 360 best bound : 8.072917e+03 -simulation ci : 8.800475e+03 ± 2.725185e+03 +simulation ci : 1.082898e+04 ± 2.947323e+03 numeric issues : 0 ------------------------------------------------------------------- @@ -672,11 +765,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 6.500000e+00 3.000000e+00 3.302097e-03 6 1 - 5 3.500000e+00 3.500000e+00 6.214142e-03 30 1 + 1 6.500000e+00 3.000000e+00 3.134012e-03 6 1 + 5 3.500000e+00 3.500000e+00 5.883932e-03 30 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 6.214142e-03 +total time (s) : 5.883932e-03 total solves : 30 best bound : 3.500000e+00 simulation ci : 4.100000e+00 ± 1.176000e+00 @@ -703,11 +796,11 @@ subproblem structure ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 6.500000e+00 1.100000e+01 3.272057e-03 6 1 - 5 5.500000e+00 1.100000e+01 5.818129e-03 30 1 + 1 6.500000e+00 1.100000e+01 2.928972e-03 6 1 + 5 5.500000e+00 1.100000e+01 5.285025e-03 30 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 5.818129e-03 +total time (s) : 5.285025e-03 total solves : 30 best bound : 1.100000e+01 simulation ci : 5.700000e+00 ± 3.920000e-01 diff --git a/previews/PR797/tutorial/arma/index.html b/previews/PR797/tutorial/arma/index.html index a731cf6b4..fca9c8ee8 100644 --- a/previews/PR797/tutorial/arma/index.html +++ b/previews/PR797/tutorial/arma/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Auto-regressive stochastic processes

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

SDDP.jl assumes that the random variable in each node is independent of the random variables in all other nodes. However, a common request is to model the random variables by some auto-regressive process.

There are two ways to do this:

  1. model the random variable as a Markov chain
  2. use the "state-space expansion" trick
Info

This tutorial is in the context of a hydro-thermal scheduling example, but it should be apparent how the ideas transfer to other applications.

using SDDP
+

Auto-regressive stochastic processes

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

SDDP.jl assumes that the random variable in each node is independent of the random variables in all other nodes. However, a common request is to model the random variables by some auto-regressive process.

There are two ways to do this:

  1. model the random variable as a Markov chain
  2. use the "state-space expansion" trick
Info

This tutorial is in the context of a hydro-thermal scheduling example, but it should be apparent how the ideas transfer to other applications.

using SDDP
 import HiGHS

The state-space expansion trick

In An introduction to SDDP.jl, we assumed that the inflows were stagewise-independent. However, in many cases this is not correct, and inflow models are more accurately described by an auto-regressive process such as:

\[inflow_{t} = inflow_{t-1} + \varepsilon\]

Here $\varepsilon$ is a random variable, and the inflow in stage $t$ is the inflow in stage $t-1$ plus $\varepsilon$ (which might be negative).

For simplicity, we omit any coefficients and other terms, but this could easily be extended to a model like

\[inflow_{t} = a \times inflow_{t-1} + b + \varepsilon\]

In practice, you can estimate a distribution for $\varepsilon$ by fitting the chosen statistical model to historical data, and then using the empirical residuals.
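As a concrete illustration (a minimal sketch, not part of the tutorial), the AR(1) coefficients can be estimated by least squares and the empirical residuals recovered from a hypothetical vector of historical observations:

historical_inflows = [50.0, 55.0, 48.0, 60.0, 52.0, 58.0, 61.0, 49.0]  # hypothetical data
y = historical_inflows[2:end]            # inflow_t
y_lag = historical_inflows[1:end-1]      # inflow_{t-1}
X = [y_lag ones(length(y_lag))]          # design matrix for a * inflow_{t-1} + b
a, b = X \ y                             # least-squares estimates of a and b
residuals = y .- (a .* y_lag .+ b)       # empirical residuals: a sample space for ε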

To implement the auto-regressive model in SDDP.jl, we introduce inflow as a state variable.

Tip

Our rule of thumb for "when is something a state variable?" is: if you need the value of a variable from a previous stage to compute something in stage $t$, then that variable is a state variable.
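Concretely, the subproblem introduces inflow as a SDDP.State, declares a noise variable that is fixed inside SDDP.parameterize, and writes the water balance against inflow.out. A condensed, self-contained sketch with illustrative bounds, costs, and noise values is shown here; the tutorial's own model follows below.

model_sketch = SDDP.LinearPolicyGraph(;
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, 0 <= x <= 200, SDDP.State, initial_value = 200)  # reservoir volume
    @variable(sp, inflow, SDDP.State, initial_value = 50.0)        # lagged inflow
    @variables(sp, begin
        g_t >= 0  # thermal generation
        g_h >= 0  # hydro generation
        s >= 0    # spill
        ε         # the random increment
    end)
    @constraint(sp, inflow.out == inflow.in + ε)             # inflow_t = inflow_{t-1} + ε
    @constraint(sp, x.out == x.in - g_h - s + inflow.out)    # water balance
    @constraint(sp, g_h + g_t == 150)                         # demand
    SDDP.parameterize(sp, [-10.0, 0.0, 10.0]) do ω            # sample ε in each stage
        return JuMP.fix(ε, ω)
    end
    @stageobjective(sp, 50 * t * g_t)
end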

model = SDDP.LinearPolicyGraph(;
     stages = 3,
     sense = :Min,
@@ -44,37 +44,36 @@
     end
     return inflow
 end
simulator (generic function with 1 method)
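For reference, any zero-argument function that returns a Vector{Float64} with one inflow per stage will do. A minimal sketch of such a simulator, assuming a random-walk recursion with an illustrative noise support (not the tutorial's exact simulator):

function simulator_sketch()
    inflows = zeros(3)
    current = 50.0
    Ω = [-10.0, 0.1, 9.6]          # illustrative noise support for ε
    for t in 1:3
        current += rand(Ω)         # inflow_t = inflow_{t-1} + ε
        inflows[t] = current
    end
    return round.(inflows; digits = 1)
end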

When called with no arguments, it produces a vector of inflows:

simulator()
3-element Vector{Float64}:
- 59.6
- 59.7
- 69.3
Warning

The simulator must return a Vector{Float64}, so it is limited to a uni-variate random variable. It is possible to do something similar for multi-variate random variables, but you'll have to manually construct the Markov transition matrix, and solution times scale poorly, even in the two-dimensional case.

The next step is to call SDDP.MarkovianGraph with our simulator. This function will attempt to fit a Markov chain to the stochastic process produced by your simulator. There are two key arguments:

  • budget is the total number of nodes we want in the Markov chain
  • scenarios is a limit on the number of times we can call simulator
graph = SDDP.MarkovianGraph(simulator; budget = 8, scenarios = 30)
Root
+ 40.0
+ 30.0
+ 30.1
Warning

The simulator must return a Vector{Float64}, so it is limited to a uni-variate random variable. It is possible to do something similar for multi-variate random variables, but you'll have to manually construct the Markov transition matrix, and solution times scale poorly, even in the two-dimensional case.

The next step is to call SDDP.MarkovianGraph with our simulator. This function will attempt to fit a Markov chain to the stochastic process produced by your simulator. There are two key arguments:

  • budget is the total number of nodes we want in the Markov chain
  • scenarios is a limit on the number of times we can call simulator
graph = SDDP.MarkovianGraph(simulator; budget = 8, scenarios = 30)
Root
  (0, 0.0)
 Nodes
- (1, 52.00610486599696)
- (1, 59.6)
- (2, 45.685040848203066)
- (2, 59.7)
+ (1, 46.92132893571418)
+ (2, 46.32243499322815)
+ (2, 68.37159320526222)
  (2, 69.2)
- (3, 42.694370791645426)
- (3, 50.77596542536898)
- (3, 75.84301111022079)
+ (3, 44.00132036450157)
+ (3, 59.202641512018864)
+ (3, 60.62874467567708)
+ (3, 78.8)
 Arcs
- (0, 0.0) => (1, 52.00610486599696) w.p. 0.7666666666666667
- (0, 0.0) => (1, 59.6) w.p. 0.23333333333333334
- (1, 52.00610486599696) => (2, 59.7) w.p. 0.13043478260869565
- (1, 52.00610486599696) => (2, 45.685040848203066) w.p. 0.8695652173913043
- (1, 52.00610486599696) => (2, 69.2) w.p. 0.0
- (1, 59.6) => (2, 59.7) w.p. 0.2857142857142857
- (1, 59.6) => (2, 45.685040848203066) w.p. 0.2857142857142857
- (1, 59.6) => (2, 69.2) w.p. 0.42857142857142855
- (2, 45.685040848203066) => (3, 50.77596542536898) w.p. 0.36363636363636365
- (2, 45.685040848203066) => (3, 42.694370791645426) w.p. 0.6363636363636364
- (2, 45.685040848203066) => (3, 75.84301111022079) w.p. 0.0
- (2, 59.7) => (3, 50.77596542536898) w.p. 0.8
- (2, 59.7) => (3, 42.694370791645426) w.p. 0.0
- (2, 59.7) => (3, 75.84301111022079) w.p. 0.2
- (2, 69.2) => (3, 50.77596542536898) w.p. 0.3333333333333333
- (2, 69.2) => (3, 42.694370791645426) w.p. 0.0
- (2, 69.2) => (3, 75.84301111022079) w.p. 0.6666666666666666

Here we can see we have created a MarkovianGraph with nodes like (2, 59.7). The first element of each node is the stage, and the second element is the inflow.

Create a SDDP.PolicyGraph using graph as follows:

model = SDDP.PolicyGraph(
+ (0, 0.0) => (1, 46.92132893571418) w.p. 1.0
+ (1, 46.92132893571418) => (2, 68.37159320526222) w.p. 0.23333333333333334
+ (1, 46.92132893571418) => (2, 46.32243499322815) w.p. 0.6333333333333333
+ (1, 46.92132893571418) => (2, 69.2) w.p. 0.13333333333333333
+ (2, 46.32243499322815) => (3, 59.202641512018864) w.p. 0.21052631578947367
+ (2, 46.32243499322815) => (3, 44.00132036450157) w.p. 0.7368421052631579
+ (2, 46.32243499322815) => (3, 60.62874467567708) w.p. 0.05263157894736842
+ (2, 46.32243499322815) => (3, 78.8) w.p. 0.0
+ (2, 68.37159320526222) => (3, 59.202641512018864) w.p. 0.42857142857142855
+ (2, 68.37159320526222) => (3, 44.00132036450157) w.p. 0.14285714285714285
+ (2, 68.37159320526222) => (3, 60.62874467567708) w.p. 0.2857142857142857
+ (2, 68.37159320526222) => (3, 78.8) w.p. 0.14285714285714285
+ (2, 69.2) => (3, 59.202641512018864) w.p. 0.0
+ (2, 69.2) => (3, 44.00132036450157) w.p. 0.0
+ (2, 69.2) => (3, 60.62874467567708) w.p. 0.5
+ (2, 69.2) => (3, 78.8) w.p. 0.5

Here we can see we have created a MarkovianGraph with nodes like (2, 69.2). The first element of each node is the stage, and the second element is the inflow.

Create a SDDP.PolicyGraph using graph as follows:

model = SDDP.PolicyGraph(
     graph;  # <--- New stuff
     sense = :Min,
     lower_bound = 0.0,
@@ -91,7 +90,7 @@
     # The new water balance constraint using the node:
     @constraint(sp, x.out == x.in - g_h - s + inflow)
 end
A policy graph with 8 nodes.
- Node indices: (1, 52.00610486599696), (1, 59.6), (2, 45.685040848203066), (2, 59.7), (2, 69.2), (3, 42.694370791645426), (3, 50.77596542536898), (3, 75.84301111022079)
+ Node indices: (1, 46.92132893571418), (2, 46.32243499322815), (2, 68.37159320526222), (2, 69.2), (3, 44.00132036450157), (3, 59.202641512018864), (3, 60.62874467567708), (3, 78.8)
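Once the policy graph is built, it can be trained and simulated in the usual way. A short usage sketch, assuming the tutorial's variable names x, g_t, g_h, and s (the iteration limit is illustrative):

SDDP.train(model; iteration_limit = 10)
simulations = SDDP.simulate(model, 5, [:x, :g_t, :g_h, :s])
objectives = [sum(stage[:stage_objective] for stage in sim) for sim in simulations]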
 

When can this trick be used?

The Markov chain approach should be used when:

  • The random variable is uni-variate
  • The random variable appears in the objective function or as a variable coefficient in the constraint matrix
  • It's non-trivial to write the stochastic process as a series of constraints (for example, it uses nonlinear terms)
  • The number of nodes is modest (for example, a budget of hundreds, up to perhaps 1000)

Vector auto-regressive models

The state-space expansion section assumed that the random variable was uni-variate. However, the approach naturally extends to vector auto-regressive models. For example, if inflow is a 2-dimensional vector, then we can fit a vector auto-regressive model to it as follows:

\[inflow_{t} = A \times inflow_{t-1} + b + \varepsilon\]

Here A is a 2-by-2 matrix, and b and $\varepsilon$ are 2-by-1 vectors.

model = SDDP.LinearPolicyGraph(;
     stages = 3,
     sense = :Min,
@@ -131,4 +130,4 @@
     end
 end
A policy graph with 3 nodes.
  Node indices: 1, 2, 3
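To make the vector case concrete, here is a minimal sketch of the two-dimensional state-space expansion, with illustrative values for A, b, and the noise support (a placeholder stage objective stands in for the full hydro-thermal model):

A = [0.8 0.2; 0.2 0.8]   # illustrative VAR(1) coefficient matrix
b = [10.0, 5.0]          # illustrative intercept
var_sketch = SDDP.LinearPolicyGraph(;
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, inflow[1:2], SDDP.State, initial_value = 50.0)
    @variable(sp, ε[1:2])   # vector-valued noise
    @constraint(
        sp,
        [i = 1:2],
        inflow[i].out == sum(A[i, j] * inflow[j].in for j in 1:2) + b[i] + ε[i],
    )
    Ω = [[-5.0, -5.0], [0.0, 0.0], [5.0, 5.0]]
    SDDP.parameterize(sp, Ω) do ω
        JuMP.fix.(ε, ω)     # fix both components of the noise
        return
    end
    @stageobjective(sp, sum(inflow[i].out for i in 1:2))  # placeholder objective
end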
-
+
diff --git a/previews/PR797/tutorial/convex.cuts.json b/previews/PR797/tutorial/convex.cuts.json index 7b1a9ecb4..d15a02b97 100644 --- a/previews/PR797/tutorial/convex.cuts.json +++ b/previews/PR797/tutorial/convex.cuts.json @@ -1 +1 @@ -[{"risk_set_cuts":[],"node":"1","single_cuts":[{"state":{"x":0.0},"intercept":243326.5932873207,"coefficients":{"x":-317616.6666712048}},{"state":{"x":0.0},"intercept":321380.875475521,"coefficients":{"x":-318249.99997748446}},{"state":{"x":0.0},"intercept":346483.98068044573,"coefficients":{"x":-318250.0000028512}},{"state":{"x":0.0},"intercept":354558.1883998782,"coefficients":{"x":-318250.0000165849}},{"state":{"x":0.0},"intercept":357155.19103196025,"coefficients":{"x":-318250.00002266077}},{"state":{"x":0.0},"intercept":357990.49560985994,"coefficients":{"x":-318250.0000249221}},{"state":{"x":0.6747303253232005},"intercept":143526.23843431904,"coefficients":{"x":-318249.999593561}},{"state":{"x":5.339784227252946},"intercept":16891.51547643259,"coefficients":{"x":-633.3333230400074}},{"state":{"x":5.004838142560195},"intercept":34570.755454700105,"coefficients":{"x":-1034.4444389656212}},{"state":{"x":7.669892057195955},"intercept":49120.53381532711,"coefficients":{"x":-982.7222286996239}},{"state":{"x":10.334945989032521},"intercept":62959.20205548993,"coefficients":{"x":-933.5861238177274}},{"state":{"x":1.570106700983442},"intercept":91939.23532134033,"coefficients":{"x":-102024.80136239847}},{"state":{"x":1.2351583459434825},"intercept":129789.86068842167,"coefficients":{"x":-33891.18724449182}},{"state":{"x":0.9002099929562093},"intercept":170134.70303683338,"coefficients":{"x":-112144.70920724627}},{"state":{"x":0.20021007347356834},"intercept":356694.2697306806,"coefficients":{"x":-328348.87597475364}},{"state":{"x":0.9002099847533239},"intercept":189816.15837894194,"coefficients":{"x":-115342.6867524897}},{"state":{"x":0.9002099848224397},"intercept":189816.15837087654,"coefficients":{"x":-115342.6867514185}},{"state":{"x":2.6747417859516904},"intercept":99343.33023926031,"coefficients":{"x":-11977.846712101324}},{"state":{"x":5.339793429002897},"intercept":88381.99569854088,"coefficients":{"x":-5038.620421847489}},{"state":{"x":8.004845071660784},"intercept":89427.35239806405,"coefficients":{"x":-4786.689415135443}},{"state":{"x":7.66989671458264},"intercept":105091.25774734185,"coefficients":{"x":-4547.354956267764}},{"state":{"x":10.334948357223443},"intercept":107307.2170775501,"coefficients":{"x":-4319.987217105521}},{"state":{"x":34.47465546135932},"intercept":24471.351408416187,"coefficients":{"x":-2951.3292641509734}},{"state":{"x":35.174655414329536},"intercept":56454.64362334776,"coefficients":{"x":-886.9068228916495}},{"state":{"x":34.8096015556942},"intercept":72510.00024034713,"coefficients":{"x":-842.5614863251166}},{"state":{"x":35.50960150787011},"intercept":86796.47456655078,"coefficients":{"x":-800.4334161041636}},{"state":{"x":35.174655407494726},"intercept":101090.30622409767,"coefficients":{"x":-760.4117492544106}},{"state":{"x":34.20000158406647},"intercept":115030.17736874847,"coefficients":{"x":-722.3911657499657}},{"state":{"x":29.900001537210848},"intercept":130435.7209677729,"coefficients":{"x":-686.2716119730778}},{"state":{"x":28.600001490606026},"intercept":142887.55532123998,"coefficients":{"x":-651.9580351036365}},{"state":{"x":27.30000144401129},"intercept":154598.4115769979,"coefficients":{"x":-619.3601372821113}},{"state":{"x":28.00000139742461},"intercept":164434.47369350903,"coefficients":{"x":-588.392134238
9939}},{"state":{"x":26.700001350833606},"intercept":174848.67325332266,"coefficients":{"x":-558.9725315513522}},{"state":{"x":25.400001302841833},"intercept":184640.6613115414,"coefficients":{"x":-531.0239092254958}},{"state":{"x":24.100001256290092},"intercept":193846.62351834806,"coefficients":{"x":-504.47271715032537}},{"state":{"x":24.800001209714637},"intercept":201542.18432444573,"coefficients":{"x":-479.24908456472804}},{"state":{"x":25.500001163165727},"intercept":208813.86711724673,"coefficients":{"x":-455.2866335006882}},{"state":{"x":26.200001116613503},"intercept":215684.82080282416,"coefficients":{"x":-432.52230388256964}},{"state":{"x":21.900001070057726},"intercept":224231.42002148923,"coefficients":{"x":-410.896191197144}},{"state":{"x":22.600001023492933},"intercept":230262.68500122358,"coefficients":{"x":-390.35138405631676}},{"state":{"x":21.300000976925208},"intercept":236702.20719397528,"coefficients":{"x":-370.83381744255064}},{"state":{"x":20.00000093036217},"intercept":242752.4150819843,"coefficients":{"x":-352.29212934282526}},{"state":{"x":18.70000088380225},"intercept":248436.1413010264,"coefficients":{"x":-334.677525873133}},{"state":{"x":19.400000837279347},"intercept":253139.02119042818,"coefficients":{"x":-317.94365243910653}},{"state":{"x":20.100000790721516},"intercept":257580.8173681741,"coefficients":{"x":-302.0464725779896}},{"state":{"x":18.800000744160755},"intercept":262349.76930136036,"coefficients":{"x":-286.9441518892128}},{"state":{"x":19.500000697637425},"intercept":266282.97474606545,"coefficients":{"x":-272.5969471529904}},{"state":{"x":20.200000651079854},"intercept":269997.27985184634,"coefficients":{"x":-258.96710253501794}},{"state":{"x":20.90000060453167},"intercept":273504.7416387844,"coefficients":{"x":-246.01875003882267}},{"state":{"x":16.60000055796913},"intercept":277985.3477386157,"coefficients":{"x":-233.71781598952083}},{"state":{"x":12.300000511421253},"intercept":282164.42602067406,"coefficients":{"x":-222.03193025338655}},{"state":{"x":13.000000464886412},"intercept":285006.27609276684,"coefficients":{"x":-210.93033843619628}},{"state":{"x":11.700000418345788},"intercept":288089.5923635106,"coefficients":{"x":-200.3838266809543}},{"state":{"x":12.40000037186862},"intercept":290601.62668634515,"coefficients":{"x":-190.36464031404392}},{"state":{"x":11.100000325333037},"intercept":293334.22105039685,"coefficients":{"x":-180.84641410485258}},{"state":{"x":11.800000278853842},"intercept":295553.7383689153,"coefficients":{"x":-171.8040987270412}},{"state":{"x":10.500000232365807},"intercept":297974.6908224687,"coefficients":{"x":-163.21390001931235}},{"state":{"x":11.200000185894515},"intercept":299934.851921036,"coefficients":{"x":-155.05321068484344}},{"state":{"x":9.900000139415198},"intercept":302078.95589300105,"coefficients":{"x":-147.30055682649578}},{"state":{"x":10.600000092952685},"intercept":303809.23594209424,"coefficients":{"x":-139.9355350259734}},{"state":{"x":11.30000004647961},"intercept":305441.5852232839,"coefficients":{"x":-132.93876381934456}},{"state":{"x":0.35186966353233856},"intercept":460025.66766550404,"coefficients":{"x":-317658.76393406146}},{"state":{"x":1.0217658604366708},"intercept":364490.19262665487,"coefficients":{"x":-101584.03923634284}},{"state":{"x":0.6868176841627447},"intercept":402386.6523012378,"coefficients":{"x":-318249.99939578184}},{"state":{"x":0.35186966490372684},"intercept":524553.1655462777,"coefficients":{"x":-318250.0000419531}},{"state":{"x":0.35186966474399967},"intercept":529
483.446003206,"coefficients":{"x":-318250.0000419452}},{"state":{"x":0.6868176827913564},"intercept":424447.49471779657,"coefficients":{"x":-318249.9993957485}},{"state":{"x":0.3518696635323385},"intercept":531539.0992763988,"coefficients":{"x":-318250.00004194304}},{"state":{"x":1.0217658604707087},"intercept":387966.4424886175,"coefficients":{"x":-102362.50001194184}},{"state":{"x":0.6868176841967829},"intercept":425436.2263267711,"coefficients":{"x":-318249.99943258625}},{"state":{"x":0.35186966493776495},"intercept":537343.3006470341,"coefficients":{"x":-349714.7916216376}},{"state":{"x":0.0},"intercept":665584.762183403,"coefficients":{"x":-349714.7916952931}},{"state":{"x":0.3518696609184766},"intercept":544173.4247651,"coefficients":{"x":-349714.7916300421}},{"state":{"x":0.0},"intercept":667747.6343766664,"coefficients":{"x":-349714.7916826482}},{"state":{"x":0.3518696649037268},"intercept":544858.3328991635,"coefficients":{"x":-349714.79162170226}},{"state":{"x":0.3518696694559398},"intercept":544910.494243583,"coefficients":{"x":-349714.79162170814}},{"state":{"x":2.0048446931150767},"intercept":329383.122164487,"coefficients":{"x":-33406.88896682212}},{"state":{"x":4.669896388081862},"intercept":310123.7413424814,"coefficients":{"x":-1625.4305981526159}},{"state":{"x":7.334948106007296},"intercept":310923.02071326174,"coefficients":{"x":-1071.53673719523}},{"state":{"x":2.399109772538853},"intercept":329352.85103239346,"coefficients":{"x":-11868.168193400455}},{"state":{"x":2.399109781270263},"intercept":330355.50061758916,"coefficients":{"x":-12162.181496951294}},{"state":{"x":1.6991098584185145},"intercept":349527.07514685113,"coefficients":{"x":-36899.48248602479}},{"state":{"x":2.3991097725388544},"intercept":333552.694576604,"coefficients":{"x":-13268.169566331755}},{"state":{"x":2.399109769024788},"intercept":334081.15711562836,"coefficients":{"x":-16519.756405626336}},{"state":{"x":3.069006500493601},"intercept":327890.82917926577,"coefficients":{"x":-6814.589494311517}},{"state":{"x":2.7340581355718228},"intercept":332948.9576356814,"coefficients":{"x":-8339.209550107635}},{"state":{"x":2.3991097725388517},"intercept":337384.50498884014,"coefficients":{"x":-15275.58572770669}},{"state":{"x":2.3991097861608424},"intercept":337384.5047804637,"coefficients":{"x":-15275.585648962704}},{"state":{"x":5.669896735564635},"intercept":319054.12467046554,"coefficients":{"x":-3447.27330115479}},{"state":{"x":10.334948368063337},"intercept":312509.1341127749,"coefficients":{"x":-1922.6532884228084}},{"state":{"x":5.334948505150162},"intercept":327296.6147779471,"coefficients":{"x":-2192.173545599603}},{"state":{"x":11.999999640027978},"intercept":318154.48032411444,"coefficients":{"x":-2082.56487469702}},{"state":{"x":14.699999614642827},"intercept":317969.8947323075,"coefficients":{"x":-1978.436631731095}},{"state":{"x":12.39999968896663},"intercept":327382.67536355474,"coefficients":{"x":-1879.5148160089489}},{"state":{"x":15.09999976029335},"intercept":326961.85251065675,"coefficients":{"x":-1785.5390832839728}},{"state":{"x":12.79999980692474},"intercept":335076.26057549263,"coefficients":{"x":-1696.26213882927}},{"state":{"x":10.499999843376074},"intercept":342392.1178839353,"coefficients":{"x":-1611.4490458236546}},{"state":{"x":11.199999884725267},"intercept":344376.36370736256,"coefficients":{"x":-1530.8766059982174}},{"state":{"x":11.899999910540005},"intercept":346136.4993195876,"coefficients":{"x":-1454.3327875741495}},{"state":{"x":14.59999985552773},"intercept":344926.742
9192222,"coefficients":{"x":-1381.616157022172}},{"state":{"x":12.299999932750508},"intercept":350365.5921040412,"coefficients":{"x":-1312.5353603678802}},{"state":{"x":10.334948386612458},"intercept":354810.8841777048,"coefficients":{"x":-1246.9086070411322}},{"state":{"x":3.004845142185769},"intercept":366330.7673540708,"coefficients":{"x":-1978.1877262212095}},{"state":{"x":7.669896762019866},"intercept":360639.35769203975,"coefficients":{"x":-1647.7066444400589}},{"state":{"x":7.334948381000333},"intercept":363387.4724234955,"coefficients":{"x":-1565.3213315329704}},{"state":{"x":0.4675717413976212},"intercept":543940.0600380314,"coefficients":{"x":-318249.9999465836}},{"state":{"x":1.1675716526468756},"intercept":430001.16716146783,"coefficients":{"x":-102224.85177070145}},{"state":{"x":0.4675717417850101},"intercept":552122.1088110429,"coefficients":{"x":-318249.99996644916}},{"state":{"x":1.1675716530342637},"intercept":432881.01582212525,"coefficients":{"x":-102362.49992798879}},{"state":{"x":0.6397937265224641},"intercept":499919.899695919,"coefficients":{"x":-318249.99990357354}},{"state":{"x":1.3397936377715332},"intercept":416094.08683919004,"coefficients":{"x":-102362.49968631772}},{"state":{"x":6.004845241605676},"intercept":367328.4828095429,"coefficients":{"x":-1624.7035012852255}},{"state":{"x":10.66989682793027},"intercept":361966.52507464064,"coefficients":{"x":-1543.4683355389839}},{"state":{"x":10.334948413960763},"intercept":364384.2111041142,"coefficients":{"x":-1466.2949301105264}}],"multi_cuts":[]}] \ No newline at end of file +[{"risk_set_cuts":[],"node":"1","single_cuts":[{"state":{"x":0.0},"intercept":243326.5932873207,"coefficients":{"x":-317616.6666712048}},{"state":{"x":0.0},"intercept":321380.875475521,"coefficients":{"x":-318249.99997748446}},{"state":{"x":0.0},"intercept":346483.98068044573,"coefficients":{"x":-318250.0000028512}},{"state":{"x":0.0},"intercept":354558.1883998782,"coefficients":{"x":-318250.0000165849}},{"state":{"x":0.0},"intercept":357155.19103196025,"coefficients":{"x":-318250.00002266077}},{"state":{"x":0.0},"intercept":357990.49560985994,"coefficients":{"x":-318250.0000249221}},{"state":{"x":0.0},"intercept":358259.16447035177,"coefficients":{"x":-318250.00002568774}},{"state":{"x":1.004836793707554},"intercept":104502.32174045643,"coefficients":{"x":-101729.16675464461}},{"state":{"x":0.6698921986092845},"intercept":148532.27219942765,"coefficients":{"x":-349514.23575976526}},{"state":{"x":0.3349461006592544},"intercept":273330.5249344267,"coefficients":{"x":-349514.23612227093}},{"state":{"x":0.0},"intercept":392846.7328678744,"coefficients":{"x":-349514.236165854}},{"state":{"x":0.6698921986092845},"intercept":159485.0022909803,"coefficients":{"x":-349514.2357600201}},{"state":{"x":0.3349461006592544},"intercept":276798.88946347113,"coefficients":{"x":-349514.2361227015}},{"state":{"x":0.0},"intercept":393945.0483021176,"coefficients":{"x":-349514.2361662153}},{"state":{"x":0.0},"intercept":393969.6622715268,"coefficients":{"x":-349514.23616622365}},{"state":{"x":0.0},"intercept":393977.4566951732,"coefficients":{"x":-349514.23616622627}},{"state":{"x":0.3349461006592544},"intercept":276911.4944092662,"coefficients":{"x":-349514.2361227162}},{"state":{"x":0.0},"intercept":393980.7065349545,"coefficients":{"x":-349514.2361662274}},{"state":{"x":0.0},"intercept":393980.9540452587,"coefficients":{"x":-349514.2361662275}},{"state":{"x":0.6698921986092845},"intercept":159844.1723304849,"coefficients":{"x":-349514.2357600446}},{"state":{"x
":0.3349461006592544},"intercept":276912.62664265267,"coefficients":{"x":-349514.23612271633}},{"state":{"x":0.0},"intercept":393981.0650755269,"coefficients":{"x":-349514.2361662275}},{"state":{"x":0.6698921986092845},"intercept":159844.20749007008,"coefficients":{"x":-349514.2357600446}},{"state":{"x":0.3349461006592544},"intercept":276912.6377765217,"coefficients":{"x":-349514.23612271633}},{"state":{"x":0.0},"intercept":393981.0686012518,"coefficients":{"x":-349514.2361662275}},{"state":{"x":0.0},"intercept":393981.06869958586,"coefficients":{"x":-349514.2361662275}},{"state":{"x":0.0},"intercept":393981.06873072503,"coefficients":{"x":-349514.2361662275}},{"state":{"x":0.0},"intercept":393981.0687405858,"coefficients":{"x":-349514.2361662275}},{"state":{"x":0.0},"intercept":393981.0687437084,"coefficients":{"x":-349514.2361662275}},{"state":{"x":0.0},"intercept":393981.0687446971,"coefficients":{"x":-349514.2361662275}},{"state":{"x":1.004836793707554},"intercept":114095.79477302845,"coefficients":{"x":-112262.84149603915}},{"state":{"x":0.6698921986092845},"intercept":159656.11709911487,"coefficients":{"x":-352849.8994007419}},{"state":{"x":0.3349461006592544},"intercept":278489.85457248666,"coefficients":{"x":-352849.8998073284}},{"state":{"x":0.0},"intercept":396880.7655017967,"coefficients":{"x":-352849.899832754}},{"state":{"x":0.3349461006592544},"intercept":278760.0515187867,"coefficients":{"x":-352849.89979030535}},{"state":{"x":0.0},"intercept":396966.3278663192,"coefficients":{"x":-352849.89983277896}},{"state":{"x":0.0},"intercept":396972.8443263268,"coefficients":{"x":-352849.8998327809}},{"state":{"x":0.0},"intercept":396974.9078719961,"coefficients":{"x":-352849.8998327815}},{"state":{"x":0.0},"intercept":396975.5613281245,"coefficients":{"x":-352849.8998327817}},{"state":{"x":0.6698921986092845},"intercept":160604.3730944301,"coefficients":{"x":-352849.8994007543}},{"state":{"x":0.3349461006592544},"intercept":278790.1356378552,"coefficients":{"x":-352849.89979030896}},{"state":{"x":0.0},"intercept":396975.85450402467,"coefficients":{"x":-352849.8998327818}},{"state":{"x":0.3349461006592544},"intercept":278790.1630361588,"coefficients":{"x":-352849.89979030896}},{"state":{"x":0.0},"intercept":396975.8631801542,"coefficients":{"x":-352849.8998327818}},{"state":{"x":0.0},"intercept":396975.8638423745,"coefficients":{"x":-352849.8998327818}},{"state":{"x":1.004836793707554},"intercept":114704.83901016395,"coefficients":{"x":-113319.13498685189}},{"state":{"x":0.6698921986092845},"intercept":160471.93261423847,"coefficients":{"x":-353184.3923256809}},{"state":{"x":0.3349461006592544},"intercept":278798.6538096404,"coefficients":{"x":-353184.3927458069}},{"state":{"x":0.0},"intercept":397105.5682765613,"coefficients":{"x":-353184.39279775607}},{"state":{"x":0.6698921986092845},"intercept":160513.00568440114,"coefficients":{"x":-353184.3923382999}},{"state":{"x":0.3349461006592544},"intercept":278811.6602844246,"coefficients":{"x":-353184.3927543283}},{"state":{"x":0.0},"intercept":397109.6869938252,"coefficients":{"x":-353184.3928110058}},{"state":{"x":0.0},"intercept":397109.77931566356,"coefficients":{"x":-353184.3928110058}},{"state":{"x":0.0},"intercept":397109.8085509142,"coefficients":{"x":-353184.3928110058}},{"state":{"x":0.0},"intercept":397109.81780874403,"coefficients":{"x":-353184.3928110058}},{"state":{"x":0.3349461006592544},"intercept":278812.0855761277,"coefficients":{"x":-353184.3927543284}},{"state":{"x":0.0},"intercept":397109.8216695387,"coefficients":{"x"
:-353184.3928110058}},{"state":{"x":0.0},"intercept":397109.8219629754,"coefficients":{"x":-353184.3928110058}},{"state":{"x":0.0},"intercept":397109.82205589715,"coefficients":{"x":-353184.3928110058}},{"state":{"x":0.0},"intercept":397109.82208532223,"coefficients":{"x":-353184.3928110058}},{"state":{"x":0.0},"intercept":397109.82209464005,"coefficients":{"x":-353184.3928110058}},{"state":{"x":0.3349461006592544},"intercept":278812.0869333281,"coefficients":{"x":-353184.3927543284}},{"state":{"x":0.0},"intercept":397109.82209931855,"coefficients":{"x":-353184.3928110058}},{"state":{"x":0.0},"intercept":397109.82210467744,"coefficients":{"x":-353184.39282431966}},{"state":{"x":0.004839783237610497},"intercept":395400.4862012116,"coefficients":{"x":-353184.39282317745}},{"state":{"x":2.6698936309934798},"intercept":22732.488087711827,"coefficients":{"x":-36834.39236015842}},{"state":{"x":5.3349460650066005},"intercept":18860.785165229132,"coefficients":{"x":-1583.3333154240474}},{"state":{"x":5.0048452488827815},"intercept":38720.289143969676,"coefficients":{"x":-1636.1111682942442}},{"state":{"x":7.669896832613536},"intercept":52872.06532339254,"coefficients":{"x":-1554.3056582579154}},{"state":{"x":10.334948416405053},"intercept":66342.15776906771,"coefficients":{"x":-1476.5904003870546}},{"state":{"x":1.3140744788089118},"intercept":129051.03581832969,"coefficients":{"x":-113153.38863197068}},{"state":{"x":0.614074569487701},"intercept":226256.72278861314,"coefficients":{"x":-318249.9999496664}},{"state":{"x":1.3140744807342495},"intercept":143835.21868050355,"coefficients":{"x":-102362.49979727317}},{"state":{"x":1.3140744797354362},"intercept":143960.8624891105,"coefficients":{"x":-102362.49980866902}},{"state":{"x":0.2490230439263425},"intercept":363004.4057368755,"coefficients":{"x":-349714.79158868716}},{"state":{"x":0.9490229551780073},"intercept":187843.10078724346,"coefficients":{"x":-112326.35070908452}},{"state":{"x":1.6490228670480749},"intercept":131257.302852938,"coefficients":{"x":-36987.598189618424}},{"state":{"x":1.3140744807344384},"intercept":152993.7609098988,"coefficients":{"x":-123089.089885349}},{"state":{"x":1.6490228670480749},"intercept":137011.05599667164,"coefficients":{"x":-51324.28452930077}},{"state":{"x":1.3140744807344384},"intercept":154202.04228826988,"coefficients":{"x":-51324.28502966446}},{"state":{"x":1.6490228670478857},"intercept":137011.05599699792,"coefficients":{"x":-51324.28450143214}},{"state":{"x":1.3140744807342495},"intercept":154202.0422884318,"coefficients":{"x":-51324.28503207567}},{"state":{"x":1.3140744807342495},"intercept":154202.04228843178,"coefficients":{"x":-51324.285032075684}},{"state":{"x":1.314074474455739},"intercept":154202.04261076736,"coefficients":{"x":-51324.28503451555}},{"state":{"x":0.9189198208138118},"intercept":201632.98844755237,"coefficients":{"x":-123089.09023612597}},{"state":{"x":1.6189197326842222},"intercept":138556.07785778868,"coefficients":{"x":-51324.2846288428}},{"state":{"x":1.2839713472996899},"intercept":156699.1282132047,"coefficients":{"x":-123089.0894500429}},{"state":{"x":1.9839712591642478},"intercept":121629.18802043647,"coefficients":{"x":-40561.54445920297}},{"state":{"x":1.6490228670478857},"intercept":136787.6719139076,"coefficients":{"x":-52456.03426575769}},{"state":{"x":1.3140744807342495},"intercept":154357.73596388218,"coefficients":{"x":-52456.03479760503}},{"state":{"x":1.3140744807342413},"intercept":154357.73596388265,"coefficients":{"x":-52456.03479760503}},{"state":{"x":1.30000
0088140661},"intercept":155096.02277795118,"coefficients":{"x":-52456.037191062824}},{"state":{"x":0.0},"intercept":462591.94952613197,"coefficients":{"x":-334227.7468600484}},{"state":{"x":0.0},"intercept":466550.4417396543,"coefficients":{"x":-334227.7468628674}},{"state":{"x":0.614074568488888},"intercept":263063.6018753948,"coefficients":{"x":-330461.1558957928}},{"state":{"x":1.3140744797354367},"intercept":161928.44073360594,"coefficients":{"x":-118123.8549125337}},{"state":{"x":0.24902305580807646},"intercept":384608.44973243424,"coefficients":{"x":-333869.35682064574}},{"state":{"x":0.9490229670597415},"intercept":205337.7420128627,"coefficients":{"x":-119203.11879265298}},{"state":{"x":1.64902287892981},"intercept":139134.2100602643,"coefficients":{"x":-51225.47665680049}},{"state":{"x":1.3140744926161738},"intercept":161928.43921153896,"coefficients":{"x":-118123.85499772345}},{"state":{"x":0.6000001768941124},"intercept":268487.8681393218,"coefficients":{"x":-333838.0674648761}},{"state":{"x":1.300000088140661},"intercept":163835.81720046283,"coefficients":{"x":-119193.21020419332}},{"state":{"x":0.0},"intercept":471281.0023550791,"coefficients":{"x":-355361.1831039943}},{"state":{"x":0.614074569487701},"intercept":264907.57214438845,"coefficients":{"x":-333838.06740618404}},{"state":{"x":1.3140744807342495},"intercept":162512.36463813818,"coefficients":{"x":-119193.21033136749}},{"state":{"x":1.3140744807342495},"intercept":162512.364638138,"coefficients":{"x":-119193.21032974064}},{"state":{"x":1.3140744807342497},"intercept":162512.36463814712,"coefficients":{"x":-119193.21024672968}},{"state":{"x":1.3140744788089118},"intercept":162512.36486763431,"coefficients":{"x":-119193.21024672968}},{"state":{"x":0.6140745694875681},"intercept":264907.5721442152,"coefficients":{"x":-333838.0674206987}},{"state":{"x":1.3140744807341167},"intercept":162512.36464456492,"coefficients":{"x":-119193.21025270822}},{"state":{"x":3.0048452039402527},"intercept":104807.11810002883,"coefficients":{"x":-14262.076163390537}},{"state":{"x":2.6698968024929477},"intercept":115436.02333439601,"coefficients":{"x":-21687.724976627484}},{"state":{"x":5.334948401387881},"intercept":93941.62975051024,"coefficients":{"x":-5933.911107793759}},{"state":{"x":10.205473512484808},"intercept":85391.00462297234,"coefficients":{"x":-3296.658792178857}},{"state":{"x":9.870525111905035},"intercept":103094.62932788879,"coefficients":{"x":-3131.8258720180775}},{"state":{"x":9.30000006512732},"intercept":120817.28434237142,"coefficients":{"x":-2975.2345986914065}},{"state":{"x":3.600001869547069},"intercept":152267.6317945382,"coefficients":{"x":-2834.3153149026894}},{"state":{"x":6.300001886729413},"intercept":158711.00835166418,"coefficients":{"x":-2692.5995831270147}},{"state":{"x":4.000001706044817},"intercept":177936.20115206842,"coefficients":{"x":-2557.9698273836993}},{"state":{"x":6.700001712238983},"intercept":183708.7151153544,"coefficients":{"x":-2430.071360560159}},{"state":{"x":7.400001469174623},"intercept":194093.24781929352,"coefficients":{"x":-2308.5678133036067}},{"state":{"x":5.100001141799806},"intercept":210576.4484435483,"coefficients":{"x":-2193.1394607694156}},{"state":{"x":2.8000011320096627},"intercept":226108.6967838704,"coefficients":{"x":-2338.988341180169}},{"state":{"x":5.500000722441809},"intercept":229957.99586697068,"coefficients":{"x":-2222.0389496064677}},{"state":{"x":3.2000006913641177},"intercept":244474.75291148556,"coefficients":{"x":-2357.291276121874}},{"state":{"x":3.90000072057
6761},"intercept":251844.03013117408,"coefficients":{"x":-2239.4267728549758}},{"state":{"x":6.600000729132857},"intercept":254627.25642725057,"coefficients":{"x":-2127.455452300632}},{"state":{"x":9.300000500852994},"intercept":257519.52499080304,"coefficients":{"x":-2021.082690785446}},{"state":{"x":5.499976634922586},"intercept":272983.2040353046,"coefficients":{"x":-1920.0285655863813}},{"state":{"x":8.199978652078354},"intercept":275268.17675709125,"coefficients":{"x":-1824.0271533046352}},{"state":{"x":10.899980525141737},"intercept":277472.4890626819,"coefficients":{"x":-1732.8258051558787}},{"state":{"x":8.599981661073231},"intercept":287829.4181120329,"coefficients":{"x":-1646.1845285707968}},{"state":{"x":9.299983533997588},"intercept":292595.6406538724,"coefficients":{"x":-1563.8753142208705}},{"state":{"x":6.999984669925197},"intercept":301453.00777988404,"coefficients":{"x":-1485.6815698651685}},{"state":{"x":9.699986614684226},"intercept":302466.4552110373,"coefficients":{"x":-1411.3975025796858}},{"state":{"x":10.39998845916633},"intercept":306136.8753891638,"coefficients":{"x":-1340.8276374865554}},{"state":{"x":13.09998959509798},"intercept":306966.8104423946,"coefficients":{"x":-1273.7862628550233}},{"state":{"x":15.799991393262314},"intercept":307778.7036219864,"coefficients":{"x":-1210.0969554117726}},{"state":{"x":18.499991767755073},"intercept":308572.2867162394,"coefficients":{"x":-1149.5921123520648}},{"state":{"x":16.19999352677947},"intercept":314807.9203860667,"coefficients":{"x":-1092.112512238083}},{"state":{"x":13.899995314049011},"intercept":320478.85459110216,"coefficients":{"x":-1037.5068932748356}},{"state":{"x":11.599997044979672},"intercept":325625.9702836521,"coefficients":{"x":-985.631557063283}},{"state":{"x":12.29999886407037},"intercept":327478.42191149376,"coefficients":{"x":-936.3499869934408}},{"state":{"x":10.334948357558652},"intercept":331532.50871194096,"coefficients":{"x":-889.5324976077011}},{"state":{"x":0.02294612147636601},"intercept":601239.5791263442,"coefficients":{"x":-318249.99989099125}},{"state":{"x":0.7229460327208838},"intercept":427424.67609494354,"coefficients":{"x":-102010.85231635123}},{"state":{"x":0.38799768602576623},"intercept":529750.5708623917,"coefficients":{"x":-318250.0000258104}},{"state":{"x":0.0},"intercept":676370.6600292979,"coefficients":{"x":-349603.4365545106}},{"state":{"x":0.3879976856941409},"intercept":551229.5890205081,"coefficients":{"x":-318250.0000238123}},{"state":{"x":2.3397934142373376},"intercept":342953.0300141731,"coefficients":{"x":-1865.0188238007213}},{"state":{"x":5.0048450610238175},"intercept":341871.8260692935,"coefficients":{"x":-1814.5119457556048}},{"state":{"x":7.669896707471607},"intercept":340809.5317166413,"coefficients":{"x":-1723.7863712560352}},{"state":{"x":7.33494835372553},"intercept":344741.87300721585,"coefficients":{"x":-1637.5970829338714}},{"state":{"x":10.334948426340079},"intercept":343071.0143169347,"coefficients":{"x":-1555.7172435313723}},{"state":{"x":1.6487514592489814},"intercept":375281.05678669625,"coefficients":{"x":-33746.08046039746}},{"state":{"x":1.3138030623186916},"intercept":399961.7762064385,"coefficients":{"x":-102362.49996138341}},{"state":{"x":0.013802974287132973},"intercept":694852.6614270125,"coefficients":{"x":-349714.7916689488}},{"state":{"x":0.7138028855369929},"intercept":471249.6712395425,"coefficients":{"x":-122062.6100634259}},{"state":{"x":1.4138027968165285},"intercept":397232.95597264887,"coefficients":{"x":-40236.49324307761}},{"state
":{"x":0.713802845281302},"intercept":474488.9577155114,"coefficients":{"x":-124117.90746217078}},{"state":{"x":1.4138027565608382},"intercept":398393.68400619447,"coefficients":{"x":-40887.33745145729}},{"state":{"x":0.3487513345759198},"intercept":592449.6414375787,"coefficients":{"x":-330247.65690253524}},{"state":{"x":1.0487512458556645},"intercept":437758.0296532116,"coefficients":{"x":-118159.41482320408}},{"state":{"x":1.7487511578256576},"intercept":386239.23136576614,"coefficients":{"x":-39000.481342426865}},{"state":{"x":1.41380275748386},"intercept":399348.647543517,"coefficients":{"x":-39000.48170314647}},{"state":{"x":1.4138027574838603},"intercept":399349.3864917197,"coefficients":{"x":-39000.481703529775}},{"state":{"x":1.4138027968183595},"intercept":399349.3967596941,"coefficients":{"x":-39000.481703535464}},{"state":{"x":0.0},"intercept":711819.4957812838,"coefficients":{"x":-355033.81450459774}},{"state":{"x":0.6000001767516958},"intercept":513923.75686895335,"coefficients":{"x":-329650.1524984372}},{"state":{"x":1.300000088031347},"intercept":411115.919127597,"coefficients":{"x":-105972.54827526107}},{"state":{"x":0.0},"intercept":714112.4313472782,"coefficients":{"x":-351174.6402196134}},{"state":{"x":0.0},"intercept":714838.5275988701,"coefficients":{"x":-351174.6402223222}},{"state":{"x":0.0},"intercept":715068.4580785508,"coefficients":{"x":-351174.6402223823}},{"state":{"x":0.0},"intercept":715141.2693971179,"coefficients":{"x":-351174.6402223997}},{"state":{"x":0.013802934954465013},"intercept":710317.0855970501,"coefficients":{"x":-351174.6402062267}},{"state":{"x":0.7138028462043239},"intercept":480303.85779632046,"coefficients":{"x":-124188.7891428213}},{"state":{"x":1.41380275748386},"intercept":400289.43020662793,"coefficients":{"x":-40909.78376941772}},{"state":{"x":1.41380275748386},"intercept":400289.43020772125,"coefficients":{"x":-40909.78375586384}},{"state":{"x":1.41380275748386},"intercept":400289.4302077212,"coefficients":{"x":-40909.783755863835}},{"state":{"x":1.4138027574838596},"intercept":400289.43020570825,"coefficients":{"x":-40909.783734376775}},{"state":{"x":1.4138027573741583},"intercept":400289.43021458504,"coefficients":{"x":-40909.7837171513}},{"state":{"x":1.3836996542652553},"intercept":402246.0627973583,"coefficients":{"x":-105972.54738223554}},{"state":{"x":2.083699566229923},"intercept":374510.4584965237,"coefficients":{"x":-35141.305678328674}},{"state":{"x":1.7487511545171666},"intercept":386641.4648647261,"coefficients":{"x":-40909.78291210611}},{"state":{"x":1.4138027541753684},"intercept":400344.13131980115,"coefficients":{"x":-40909.78375171291}},{"state":{"x":2.0535964684262544},"intercept":375623.0216618838,"coefficients":{"x":-35141.306316772796}},{"state":{"x":1.7186480570892173},"intercept":387873.9500901606,"coefficients":{"x":-40909.783015618814}},{"state":{"x":1.3836996575737455},"intercept":402301.7374592957,"coefficients":{"x":-105972.54719660824}},{"state":{"x":2.0836995695384135},"intercept":374583.76380074234,"coefficients":{"x":-35141.305663047446}},{"state":{"x":1.748751157825658},"intercept":386642.77004946984,"coefficients":{"x":-40909.78305802272}},{"state":{"x":1.4138027574838596},"intercept":400345.4365037976,"coefficients":{"x":-40909.78377352187}},{"state":{"x":1.4138027570987173},"intercept":400345.4365216913,"coefficients":{"x":-40909.7837603498}},{"state":{"x":0.713802885507701},"intercept":480662.4865203752,"coefficients":{"x":-122966.71679166412}},{"state":{"x":1.4138027967872369},"intercept":400459.0
0220642315,"coefficients":{"x":-40522.794151635986}},{"state":{"x":1.0487512454618524},"intercept":440759.4734106782,"coefficients":{"x":-116150.62844410918}},{"state":{"x":1.7487511574318462},"intercept":387292.713122005,"coefficients":{"x":-38364.36568769827}},{"state":{"x":1.413802757090048},"intercept":400459.003808214,"coefficients":{"x":-40522.79409648087}},{"state":{"x":1.7487511578256576},"intercept":387292.7131149584,"coefficients":{"x":-38364.365638826705}},{"state":{"x":1.4138027574838592},"intercept":400459.0037972043,"coefficients":{"x":-40522.794083091845}},{"state":{"x":1.4138027570987175},"intercept":400459.0038128112,"coefficients":{"x":-40522.794083091845}},{"state":{"x":0.713802874462685},"intercept":480689.1344069189,"coefficients":{"x":-123987.35203822156}},{"state":{"x":1.4138027857422206},"intercept":400467.44071867876,"coefficients":{"x":-40845.99521813373}},{"state":{"x":0.34875133419077675},"intercept":598118.9177895344,"coefficients":{"x":-329448.7168065573}},{"state":{"x":1.0487512454705217},"intercept":441193.6049341625,"coefficients":{"x":-116086.84046216114}},{"state":{"x":1.7487511574405155},"intercept":387430.1880994154,"coefficients":{"x":-38344.16615679938}},{"state":{"x":1.413802757098717},"intercept":400467.441890101,"coefficients":{"x":-40845.99540578776}},{"state":{"x":0.7138028458191823},"intercept":480734.36710149504,"coefficients":{"x":-123980.9557159982}},{"state":{"x":1.4138027570987175},"intercept":400481.7644521145,"coefficients":{"x":-40843.96990249864}},{"state":{"x":0.713802846204324},"intercept":480734.36705374473,"coefficients":{"x":-123980.955716171}},{"state":{"x":1.4138027574838596},"intercept":400481.7644392922,"coefficients":{"x":-40843.9699015203}},{"state":{"x":1.4138027573741576},"intercept":400481.7644437729,"coefficients":{"x":-40843.96990152031}},{"state":{"x":1.383699657573747},"intercept":402757.5659447013,"coefficients":{"x":-105908.75886726669}},{"state":{"x":2.083699569538415},"intercept":374728.44077769393,"coefficients":{"x":-35121.106084282816}},{"state":{"x":1.7487511578256576},"intercept":387432.96598290314,"coefficients":{"x":-38344.1661638247}},{"state":{"x":1.41380275748386},"intercept":400484.54233579536,"coefficients":{"x":-40843.96988323572}},{"state":{"x":1.41380275748386},"intercept":400484.54233579495,"coefficients":{"x":-40843.96988323551}},{"state":{"x":1.4138027565608382},"intercept":400484.542380679,"coefficients":{"x":-40843.96986598735}},{"state":{"x":0.3487513345759198},"intercept":598163.3449182622,"coefficients":{"x":-329442.3212558633}},{"state":{"x":1.0487512458556645},"intercept":441255.8367966007,"coefficients":{"x":-116078.41865604097}},{"state":{"x":1.7487511578256576},"intercept":387452.6727535068,"coefficients":{"x":-38341.49925742224}},{"state":{"x":1.4138027574838599},"intercept":400484.5423357426,"coefficients":{"x":-40843.969881490004}},{"state":{"x":1.4138027569434342},"intercept":400484.54236524284,"coefficients":{"x":-40843.969864595085}},{"state":{"x":3.0048452358720716},"intercept":362035.1670299714,"coefficients":{"x":-12564.327492764143}},{"state":{"x":5.669896823956976},"intercept":353567.27872252837,"coefficients":{"x":-2075.9771292955184}},{"state":{"x":5.334948411946709},"intercept":357802.2397640563,"coefficients":{"x":-2264.7855483929875}}],"multi_cuts":[]}] \ No newline at end of file diff --git a/previews/PR797/tutorial/decision_hazard/index.html b/previews/PR797/tutorial/decision_hazard/index.html index 9a357b415..982248be1 100644 --- 
a/previews/PR797/tutorial/decision_hazard/index.html +++ b/previews/PR797/tutorial/decision_hazard/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Here-and-now and hazard-decision

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

SDDP.jl assumes that the agent gets to make a decision after observing the realization of the random variable. This is called a wait-and-see or hazard-decision model. In contrast, you might want your agent to make decisions before observing the random variable. This is called a here-and-now or decision-hazard model.

Info

The terms decision-hazard and hazard-decision come from the French hasard, meaning chance. It could also have been translated as uncertainty-decision and decision-uncertainty, but the community seems to have settled on the transliteration hazard instead. We like the hazard-decision and decision-hazard terms because they clearly communicate the order of the decision and the uncertainty.

The purpose of this tutorial is to demonstrate how to model here-and-now decisions in SDDP.jl.

This tutorial uses the following packages:

using SDDP
+

Here-and-now and hazard-decision

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

SDDP.jl assumes that the agent gets to make a decision after observing the realization of the random variable. This is called a wait-and-see or hazard-decision model. In contrast, you might want your agent to make decisions before observing the random variable. This is called a here-and-now or decision-hazard model.

Info

The terms decision-hazard and hazard-decision come from the French hasard, meaning chance. It could also have been translated as uncertainty-decision and decision-uncertainty, but the community seems to have settled on the transliteration hazard instead. We like the hazard-decision and decision-hazard terms because they clearly communicate the order of the decision and the uncertainty.

The purpose of this tutorial is to demonstrate how to model here-and-now decisions in SDDP.jl.

This tutorial uses the following packages:

using SDDP
 import HiGHS

Hazard-decision formulation

As an example, we're going to build a standard hydro-thermal scheduling model, with a single hydro-reservoir and a single thermal generation plant. In each of the four stages, we need to choose some mix of u_thermal and u_hydro to meet a demand of 9 units, where unmet demand is penalized at a rate of $500/unit.

hazard_decision = SDDP.LinearPolicyGraph(;
     stages = 4,
     sense = :Min,
@@ -74,4 +74,4 @@
     end
 end
 
-train_and_compute_cost(decision_hazard_2)
Cost = $410.0

Now we find that choosing the thermal generation before observing the inflow adds a much more reasonable cost of $10.

Summary

To summarize, the difference between here-and-now and wait-and-see variables is a modeling choice.

To create a here-and-now decision, add it as a state variable to the previous stage.

In some cases, you'll need to add an additional "first-stage" problem to enable the model to choose an optimal value for the here-and-now decision variable. You do not need to do this if the first stage is deterministic. You must make sure that the subproblem is feasible for all possible incoming values of the here-and-now decision variable.

+train_and_compute_cost(decision_hazard_2)
Cost = $410.0

Now we find that choosing the thermal generation before observing the inflow adds a much more reasonable cost of $10.

Summary

To summarize, the difference between here-and-now and wait-and-see variables is a modeling choice.

To create a here-and-now decision, add it as a state variable to the previous stage (see the sketch below).

In some cases, you'll need to add an additional "first-stage" problem to enable the model to choose an optimal value for the here-and-now decision variable. You do not need to do this if the first stage is deterministic. You must make sure that the subproblem is feasible for all possible incoming values of the here-and-now decision variable.
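
The summary above can be made concrete with a small model. The following is a minimal sketch of the decision-hazard pattern, not the tutorial's exact four-stage model: the two-stage structure, the inflow support of [2, 5, 8], and the thermal cost of 50/unit are illustrative assumptions, while the demand of 9 units and the $500/unit penalty follow the text.

using SDDP
import HiGHS

sketch = SDDP.LinearPolicyGraph(;
    stages = 2,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    # The here-and-now decision: chosen in stage 1, used in stage 2.
    @variable(sp, 0 <= u_thermal <= 9, SDDP.State, initial_value = 0)
    if t == 1
        # A deterministic first stage whose only job is to pick u_thermal.out.
        @stageobjective(sp, 50 * u_thermal.out)
    else
        @variable(sp, u_hydro >= 0)
        @variable(sp, u_unmet >= 0)
        # Demand is met by the pre-committed thermal, hydro, or the penalty term.
        @constraint(sp, u_thermal.in + u_hydro + u_unmet >= 9)
        # The inflow is observed only after u_thermal has been fixed.
        SDDP.parameterize(sp, [2, 5, 8]) do ω
            return JuMP.set_upper_bound(u_hydro, ω)
        end
        @stageobjective(sp, 500 * u_unmet)
    end
end

SDDP.train(sketch; iteration_limit = 20, print_level = 0)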

diff --git a/previews/PR797/tutorial/example_milk_producer/35ea2ce1.svg b/previews/PR797/tutorial/example_milk_producer/35ea2ce1.svg deleted file mode 100644 index e7f1b0fea..000000000 --- a/previews/PR797/tutorial/example_milk_producer/35ea2ce1.svg +++ /dev/null @@ -1,544 +0,0 @@ (SVG plot markup omitted)
diff --git a/previews/PR797/tutorial/example_milk_producer/692fe2c9.svg b/previews/PR797/tutorial/example_milk_producer/692fe2c9.svg new file mode 100644 index 000000000..19c2c3aca --- /dev/null +++ b/previews/PR797/tutorial/example_milk_producer/692fe2c9.svg @@ -0,0 +1,544 @@ (SVG plot markup omitted)
diff --git a/previews/PR797/tutorial/example_milk_producer/77967f8c.svg b/previews/PR797/tutorial/example_milk_producer/77967f8c.svg deleted file mode 100644 index a012a0b1f..000000000 --- a/previews/PR797/tutorial/example_milk_producer/77967f8c.svg +++ /dev/null @@ -1,625 +0,0 @@ (SVG plot markup omitted)
diff --git a/previews/PR797/tutorial/example_milk_producer/a499b334.svg b/previews/PR797/tutorial/example_milk_producer/a499b334.svg new file mode 100644 index 000000000..f7a7635ef --- /dev/null +++ b/previews/PR797/tutorial/example_milk_producer/a499b334.svg @@ -0,0 +1,144 @@ (SVG plot markup omitted)
diff --git a/previews/PR797/tutorial/example_milk_producer/aaf230d3.svg b/previews/PR797/tutorial/example_milk_producer/aaf230d3.svg deleted file mode 100644 index 6a76e25fb..000000000 --- a/previews/PR797/tutorial/example_milk_producer/aaf230d3.svg +++ /dev/null @@ -1,148 +0,0 @@ (SVG plot markup omitted)
diff --git a/previews/PR797/tutorial/example_milk_producer/f19b31b9.svg b/previews/PR797/tutorial/example_milk_producer/f19b31b9.svg new file mode 100644 index 000000000..bed0cc997 --- /dev/null +++ b/previews/PR797/tutorial/example_milk_producer/f19b31b9.svg @@ -0,0 +1,625 @@ (SVG plot markup omitted)
diff --git a/previews/PR797/tutorial/example_milk_producer/index.html b/previews/PR797/tutorial/example_milk_producer/index.html index 2e7204959..3a4687ca7 100644 ---
a/previews/PR797/tutorial/example_milk_producer/index.html +++ b/previews/PR797/tutorial/example_milk_producer/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Example: the milk producer

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

The purpose of this tutorial is to demonstrate how to fit a Markovian policy graph to a univariate stochastic process.

This tutorial uses the following packages:

using SDDP
+

Example: the milk producer

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

The purpose of this tutorial is to demonstrate how to fit a Markovian policy graph to a univariate stochastic process.

This tutorial uses the following packages:

using SDDP
 import HiGHS
 import Plots

Background

A company produces milk for sale on a spot market each month. The quantity of milk they produce is uncertain, and so too is the price on the spot market. The company can store unsold milk in a stockpile of dried milk powder.

The spot price is determined by an auction system, and so varies from month to month, but demonstrates serial correlation. In each auction, there is sufficient demand that the milk producer finds a buyer for all their milk, regardless of the quantity they supply. Furthermore, the spot price is independent of the milk producer (they are a small player in the market).

The spot price is highly volatile, and is the result of a process that is out of the control of the company. To counteract their price risk, the company engages in a forward contracting programme.

The forward contracting programme is a deal for physical milk four months in the future.

The futures price is the current spot price, plus some forward contango (the buyers gain certainty that they will receive the milk in the future).
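
As a purely hypothetical illustration of that pricing rule (the 5% contango premium below is an assumption for this example, not a number from the tutorial):

futures_price(spot; contango = 0.05) = spot * (1 + contango)

futures_price(6.0)  # a $6.00/kg spot price implies a $6.30/kg futures price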

In general, the milk company should forward contract (since it reduces their price risk); however, they also have production risk. Therefore, they may forward contract a fixed amount, only to find that they do not produce enough milk to meet the fixed demand. They are then forced to buy additional milk on the spot market.

The goal of the milk company is to choose the extent to which they forward contract in order to maximise (risk-adjusted) revenues, whilst managing their production risk.

A stochastic process for price

It is outside the scope of this tutorial, but assume that we have gone away and analysed historical data to fit a stochastic process to the sequence of monthly auction spot prices.

One plausible model is a multiplicative auto-regressive model of order one, where the white noise term is modeled by a finite distribution of empirical residuals. We can simulate this stochastic process as follows:

function simulator()
     residuals = [0.0987, 0.199, 0.303, 0.412, 0.530, 0.661, 0.814, 1.010, 1.290]
@@ -18,18 +18,18 @@
 end
 
 simulator()
12-element Vector{Float64}:
- 4.877152280220573
- 5.451666811234428
- 5.7082545981469845
- 5.837517440772149
- 5.730359700022198
- 5.98512582337531
- 6.2376372651679075
- 7.082729580481965
- 6.95523332392624
- 6.697990977342939
- 6.392370816658544
- 6.435360954699612

It may be helpful to visualize a number of simulations of the price process:

plot = Plots.plot(
+ 4.656953599739544
+ 4.811127185868925
+ 4.275806768492164
+ 4.173321058323062
+ 4.335185941444355
+ 4.779874307816746
+ 4.2494157871780365
+ 4.3233493291165965
+ 4.579623834332571
+ 4.080110179566259
+ 4.077588528187847
+ 3.8321210612515815

It may be helpful to visualize a number of simulations of the price process:

plot = Plots.plot(
     [simulator() for _ in 1:500];
     color = "gray",
     opacity = 0.2,
@@ -38,7 +38,7 @@
     ylabel = "Price [\$/kg]",
     xlims = (1, 12),
     ylims = (3, 9),
-)
Example block output

The prices gradually revert to the mean of $6/kg, and there is high volatility.

We can't incorporate this price process directly into SDDP.jl, but we can fit a SDDP.MarkovianGraph directly from the simulator:

graph = SDDP.MarkovianGraph(simulator; budget = 30, scenarios = 10_000);

Here budget is the number of nodes in the policy graph, and scenarios is the number of simulations to use when estimating the transition probabilities.

The graph contains too many nodes to be shown, but we can plot it:

for ((t, price), edges) in graph.nodes
+)
Example block output

The prices gradually revert to the mean of $6/kg, and there is high volatility.

We can't incorporate this price process directly into SDDP.jl, but we can fit a SDDP.MarkovianGraph directly from the simulator:

graph = SDDP.MarkovianGraph(simulator; budget = 30, scenarios = 10_000);

Here budget is the number of nodes in the policy graph, and scenarios is the number of simulations to use when estimating the transition probabilities.

The graph contains too many nodes to be shown, but we can plot it:

for ((t, price), edges) in graph.nodes
     for ((t′, price′), probability) in edges
         Plots.plot!(
             plot,
@@ -50,7 +50,7 @@
     end
 end
 
-plot
Example block output

That looks okay. Try changing budget and scenarios to see how different Markovian policy graphs can be created.

Model

Now that we have a Markovian graph, we can build the model. See if you can work out how we arrived at this formulation by reading the background description. Do all the variables and constraints make sense?

model = SDDP.PolicyGraph(
+plot
Example block output

That looks okay. Try changing budget and scenarios to see how different Markovian policy graphs can be created.

Model

Now that we have a Markovian graph, we can build the model. See if you can work out how we arrived at this formulation by reading the background description. Do all the variables and constraints make sense?

model = SDDP.PolicyGraph(
     graph;
     sense = :Max,
     upper_bound = 1e2,
@@ -111,7 +111,7 @@
     end
     return
 end
A policy graph with 30 nodes.
- Node indices: (1, 4.580033858777197), ..., (12, 7.710437337309783)
+ Node indices: (1, 4.578812406195716), ..., (12, 7.615134118569353)
 

Training a policy

Now that we have a model, we can train a policy. The SDDP.SimulatorSamplingScheme is used in the forward pass. It generates an out-of-sample sequence of prices using simulator and traverses the closest sequence of nodes in the policy graph. When calling SDDP.parameterize for each subproblem, it uses the new out-of-sample price instead of the price associated with the Markov node.

SDDP.train(
     model;
     time_limit = 20,
@@ -142,31 +142,31 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1  -4.308942e+01  5.911572e+01  1.270255e+00       162   1
-        62   9.058538e+00  7.899015e+00  2.272231e+00     10044   1
-       110   8.905371e+00  7.895026e+00  3.276748e+00     17820   1
-       155   1.013127e+01  7.894128e+00  4.277412e+00     25110   1
-       196   8.491994e+00  7.892279e+00  5.283188e+00     31752   1
-       232   8.834522e+00  7.891669e+00  6.305662e+00     37584   1
-       268   9.792430e+00  7.888880e+00  7.322312e+00     43416   1
-       302   9.310072e+00  7.888246e+00  8.330216e+00     48924   1
-       334   9.923628e+00  7.888055e+00  9.351924e+00     54108   1
-       477   9.766161e+00  7.887904e+00  1.435876e+01     77274   1
-       604   8.483836e+00  7.887751e+00  1.938403e+01     97848   1
-       618   8.351073e+00  7.887745e+00  2.000197e+01    100116   1
+         1  -4.199992e+01  5.821554e+01  1.246103e+00       162   1
+        61   9.458406e+00  7.916707e+00  2.266685e+00      9882   1
+       108   9.785254e+00  7.910481e+00  3.277447e+00     17496   1
+       153   8.746097e+00  7.904751e+00  4.288313e+00     24786   1
+       195   9.180959e+00  7.904578e+00  5.297473e+00     31590   1
+       232   1.010440e+01  7.904209e+00  6.301664e+00     37584   1
+       267   7.456076e+00  7.903732e+00  7.327330e+00     43254   1
+       300   1.026767e+01  7.903401e+00  8.340721e+00     48600   1
+       333   1.213310e+01  7.903401e+00  9.345247e+00     53946   1
+       470   1.002919e+01  7.902829e+00  1.435850e+01     76140   1
+       575   7.612376e+00  7.902545e+00  1.955877e+01     93150   1
+       586   9.359577e+00  7.902545e+00  2.004409e+01     94932   1
 -------------------------------------------------------------------
 status         : time_limit
-total time (s) : 2.000197e+01
-total solves   : 100116
-best bound     :  7.887745e+00
-simulation ci  :  8.871303e+00 ± 3.386698e-01
+total time (s) : 2.004409e+01
+total solves   : 94932
+best bound     :  7.902545e+00
+simulation ci  :  8.754209e+00 ± 3.802230e-01
 numeric issues : 0
 -------------------------------------------------------------------
Warning

We're intentionally terminating the training early so that the documentation doesn't take too long to build. If you run this example locally, increase the time limit.

Simulating the policy

When simulating the policy, we can also use the SDDP.SimulatorSamplingScheme.

simulations = SDDP.simulate(
     model,
     200,
     Symbol[:x_stock, :u_forward_sell, :u_spot_sell, :u_spot_buy];
     sampling_scheme = SDDP.SimulatorSamplingScheme(simulator),
-);

To show how the sampling scheme uses the new out-of-sample price instead of the price associated with the Markov node, compare the index of the Markov state visited in stage 12 of the first simulation:

simulations[1][12][:node_index]
(12, 5.337493515560004)

to the realization of the noise (price, ω) passed to SDDP.parameterize:

simulations[1][12][:noise_term]
(5.577169389040387, 0.2)

Visualizing the policy

Finally, we can plot the policy to gain insight (although note that we terminated the training early, so we should re-train the policy for more iterations before making too many judgements).

plot = Plots.plot(
+);

To show how the sampling scheme uses the new out-of-sample price instead of the price associated with the Markov node, compare the index of the Markov state visited in stage 12 of the first simulation:

simulations[1][12][:node_index]
(12, 4.1598246782428845)

to the realization of the noise (price, ω) passed to SDDP.parameterize:

simulations[1][12][:noise_term]
(4.496169473951539, 0.1)

Visualizing the policy

Finally, we can plot the policy to gain insight (although note that we terminated the training early, so we should re-train the policy for more iterations before making too many judgements).

plot = Plots.plot(
     SDDP.publication_plot(simulations; title = "x_stock.out") do data
         return data[:x_stock].out
     end,
@@ -180,4 +180,4 @@
         return data[:u_spot_sell]
     end;
     layout = (2, 2),
-)
Example block output

Next steps

  • Train the policy for longer. What do you observe?
  • Try creating different Markovian graphs. What happens if you add more nodes?
  • Try different risk measures
+)
Example block output

Next steps

  • Train the policy for longer. What do you observe?
  • Try creating different Markovian graphs. What happens if you add more nodes?
  • Try different risk measures
diff --git a/previews/PR797/tutorial/example_newsvendor/368ff150.svg b/previews/PR797/tutorial/example_newsvendor/368ff150.svg new file mode 100644 index 000000000..084f7cdef --- /dev/null +++ b/previews/PR797/tutorial/example_newsvendor/368ff150.svg @@ -0,0 +1,37 @@ (SVG plot markup omitted)
diff --git a/previews/PR797/tutorial/example_newsvendor/868190d0.svg b/previews/PR797/tutorial/example_newsvendor/868190d0.svg deleted file mode 100644 index 970c3d48a..000000000 --- a/previews/PR797/tutorial/example_newsvendor/868190d0.svg +++ /dev/null @@ -1,100 +0,0 @@ (SVG plot markup omitted)
diff --git a/previews/PR797/tutorial/example_newsvendor/9b0ef075.svg b/previews/PR797/tutorial/example_newsvendor/9b0ef075.svg deleted file mode 100644 index 3d3ef63ae..000000000 --- a/previews/PR797/tutorial/example_newsvendor/9b0ef075.svg +++ /dev/null @@ -1,37 +0,0 @@ (SVG plot markup omitted)
diff --git a/previews/PR797/tutorial/example_newsvendor/c011f69a.svg b/previews/PR797/tutorial/example_newsvendor/c011f69a.svg new file mode 100644 index 000000000..47d36bd22 --- /dev/null +++ b/previews/PR797/tutorial/example_newsvendor/c011f69a.svg @@ -0,0 +1,97 @@ (SVG plot markup omitted)
diff --git a/previews/PR797/tutorial/example_newsvendor/index.html b/previews/PR797/tutorial/example_newsvendor/index.html index ab07602fb..03fba0fa4 100644 --- a/previews/PR797/tutorial/example_newsvendor/index.html +++ b/previews/PR797/tutorial/example_newsvendor/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Example: two-stage newsvendor

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

The purpose of this tutorial is to demonstrate how to model and solve a two-stage stochastic program.

It is based on the Two stage stochastic programs tutorial in JuMP.

This tutorial uses the following packages

using JuMP
+

Example: two-stage newsvendor

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

The purpose of this tutorial is to demonstrate how to model and solve a two-stage stochastic program.

It is based on the Two stage stochastic programs tutorial in JuMP.

This tutorial uses the following packages

using JuMP
 using SDDP
 import Distributions
 import ForwardDiff
@@ -15,7 +15,7 @@
 d = sort!(rand(D, N));
 Ω = 1:N
 P = fill(1 / N, N);
-StatsPlots.histogram(d; bins = 20, label = "", xlabel = "Demand")
Example block output

Kelley's cutting plane algorithm

Kelley's cutting plane algorithm is an iterative method for maximizing concave functions. Given a concave function $f(x)$, Kelley's constructs an outer-approximation of the function at the minimum by a set of first-order Taylor series approximations (called cuts) constructed at a set of points $k = 1,\ldots,K$:

\[\begin{aligned} +StatsPlots.histogram(d; bins = 20, label = "", xlabel = "Demand")

Example block output

Kelley's cutting plane algorithm

Kelley's cutting plane algorithm is an iterative method for maximizing concave functions. Given a concave function $f(x)$, Kelley's constructs an outer-approximation of the function at the minimum by a set of first-order Taylor series approximations (called cuts) constructed at a set of points $k = 1,\ldots,K$:

\[\begin{aligned} f^K = \max\limits_{\theta \in \mathbb{R}, x \in \mathbb{R}^N} \;\; & \theta\\ & \theta \le f(x_k) + \nabla f(x_k)^\top (x - x_k),\quad k=1,\ldots,K\\ & \theta \le M, \end{aligned}\]
@@ -168,60 +168,50 @@ println(" Added cut: $c") end
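
The tutorial implements this algorithm in full for the newsvendor problem, and the iteration log below comes from that implementation. As a self-contained, hedged sketch of the same idea for a generic one-dimensional concave function (the objective f, its derivative df, the bounds, and the big-M value are illustrative assumptions, not the tutorial's code):

using JuMP
import HiGHS

f(x) = log(x + 1) - 0.1 * x   # an assumed concave objective
df(x) = 1 / (x + 1) - 0.1     # its derivative

function kelleys_sketch(f, df; x_min = 0.0, x_max = 100.0, M = 1e3, atol = 1e-6)
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, x_min <= x <= x_max)
    @variable(model, θ <= M)
    @objective(model, Max, θ)
    xᵏ, V̲, V̅ = x_min, -Inf, Inf
    while V̅ - V̲ > atol
        optimize!(model)
        # The outer approximation gives an upper bound; evaluating f gives a lower bound.
        xᵏ, V̅ = value(x), objective_value(model)
        V̲ = max(V̲, f(xᵏ))
        # Add the cut θ ≤ f(xᵏ) + f'(xᵏ)(x - xᵏ) at the candidate point.
        @constraint(model, θ <= f(xᵏ) + df(xᵏ) * (x - xᵏ))
    end
    return xᵏ, V̲
end

kelleys_sketch(f, df)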

Solving iteration k = 1
   xᵏ = -0.0
-  V̅ = 1224.7336737228934
+  V̅ = 1230.1709749601537
   V̲ = 0.0
   Added cut: -4.99999999999999 x_out + θ ≤ 0
 Solving iteration k = 2
-  xᵏ = 244.94673474457915
-  V̅ = 734.8402042337351
-  V̲ = 498.45049783743775
-  Added cut: 0.10000000000000007 x_out + θ ≤ 1012.8386408010541
+  xᵏ = 246.03419499203122
+  V̅ = 738.1025849760913
+  V̲ = 499.78400441512
+  Added cut: 0.10000000000000007 x_out + θ ≤ 1016.4558138983856
 Solving iteration k = 3
-  xᵏ = 198.5958119217757
-  V̅ = 595.7874357653252
-  V̲ = 552.1782790488
-  Added cut: -2.602999999999999 x_out + θ ≤ 432.4250044599701
+  xᵏ = 199.30506154870346
+  V̅ = 597.9151846461084
+  V̲ = 553.9784716168931
+  Added cut: -2.2460000000000004 x_out + θ ≤ 504.9494264759136
 Solving iteration k = 4
-  xᵏ = 214.72942520942817
-  V̅ = 561.9068478612551
-  V̲ = 548.1484079262207
-  Added cut: -1.1240000000000003 x_out + θ ≤ 736.2513844096807
+  xᵏ = 218.0334132235601
+  V̅ = 558.5856461289095
+  V̲ = 547.171049439587
+  Added cut: -0.8180000000000002 x_out + θ ≤ 804.8865438698341
 Solving iteration k = 5
-  xᵏ = 205.42689651772199
-  V̅ = 556.2974230601562
-  V̲ = 553.2236606512656
-  Added cut: -1.787000000000001 x_out + θ ≤ 596.9795896095401
+  xᵏ = 210.03999817501426
+  V̅ = 556.6192660269671
+  V̲ = 553.4650242760972
+  Added cut: -1.5830000000000009 x_out + θ ≤ 641.0517035150789
 Solving iteration k = 6
-  xᵏ = 201.66003082055198
-  V̅ = 554.0260030447627
-  V̲ = 553.4153881996327
-  Added cut: -2.2460000000000004 x_out + θ ≤ 503.80702061777873
+  xᵏ = 205.28246913901253
+  V̅ = 555.4489138841109
+  V̲ = 554.6540960475286
+  Added cut: -1.9910000000000012 x_out + θ ≤ 556.501638269779
 Solving iteration k = 7
-  xᵏ = 202.9903463872796
-  V̅ = 553.7426458290497
-  V̲ = 553.6035384124501
-  Added cut: -1.9910000000000012 x_out + θ ≤ 555.4304515299343
+  xᵏ = 202.1655364465318
+  V̅ = 554.6821484417604
+  V̲ = 554.5645097538945
+  Added cut: -2.1440000000000006 x_out + θ ≤ 525.4526725055929
 Solving iteration k = 8
-  xᵏ = 202.44482710649328
-  V̅ = 553.6084480859762
-  V̲ = 553.5730475891601
-  Added cut: -2.1440000000000006 x_out + θ ≤ 524.4209924858228
+  xᵏ = 202.9344167593827
+  V̅ = 554.6752285189449
+  V̲ = 554.6365028969318
+  Added cut: -2.042000000000001 x_out + θ ≤ 546.1132573930369
 Solving iteration k = 9
-  xᵏ = 202.67620290268576
-  V̅ = 553.6063657038104
-  V̲ = 553.5966885772682
-  Added cut: -2.093000000000001 x_out + θ ≤ 534.7478017073201
-Solving iteration k = 10
-  xᵏ = 202.7710766923003
-  V̅ = 553.6055118397039
-  V̲ = 553.6048417358244
-  Added cut: -2.042000000000001 x_out + θ ≤ 545.088456514749
-Solving iteration k = 11
-  xᵏ = 202.78421598401317
-  V̅ = 553.6053935860784
-  V̲ = 553.6053935860784
+  xᵏ = 203.6937426812158
+  V̅ = 554.6683945856485
+  V̲ = 554.6683945856496
 Terminating with near-optimal solution

To get the first-stage solution, we do:

optimize!(model)
-xᵏ = value(x_out)
202.78421598401317

To compute a second-stage solution, we do:

solve_second_stage(xᵏ, 170.0)
(V = 846.7215784015987, λ = -0.1, x = 32.78421598401317, u = 170.0)

Policy Graph

Now let's see how we can formulate and train a policy for the two-stage newsvendor problem using SDDP.jl. Under the hood, SDDP.jl implements the exact algorithm that we just wrote by hand.

model = SDDP.LinearPolicyGraph(;
+xᵏ = value(x_out)
203.6937426812158

To compute a second-stage solution, we do:

solve_second_stage(xᵏ, 170.0)
(V = 846.6306257318785, λ = -0.1, x = 33.69374268121581, u = 170.0)

Policy Graph

Now let's see how we can formulate and train a policy for the two-stage newsvendor problem using SDDP.jl. Under the hood, SDDP.jl implements the exact algorithm that we just wrote by hand.

model = SDDP.LinearPolicyGraph(;
     stages = 2,
     sense = :Max,
     upper_bound = 5 * maximum(d),  # The `M` in θ <= M
@@ -271,87 +261,87 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   0.000000e+00  7.348402e+02  6.433010e-03       103   1
-         2   5.488904e+02  5.957874e+02  2.317810e-02       406   1
-         3   5.957874e+02  5.619068e+02  2.799201e-02       509   1
-         4   5.059133e+02  5.562974e+02  3.268600e-02       612   1
-         5   6.162807e+02  5.540260e+02  3.730011e-02       715   1
-         6   6.049801e+02  5.537426e+02  4.193401e-02       818   1
-         7   6.089710e+02  5.536084e+02  4.673910e-02       921   1
-         8   6.073345e+02  5.536064e+02  5.142188e-02      1024   1
-         9   4.855049e+02  5.536055e+02  5.611801e-02      1127   1
-        10   4.679362e+02  5.536054e+02  6.076694e-02      1230   1
-        11   6.083526e+02  5.536054e+02  6.536603e-02      1333   1
-        12   6.083526e+02  5.536054e+02  7.023501e-02      1436   1
-        13   4.691372e+02  5.536054e+02  7.509899e-02      1539   1
-        14   5.653721e+02  5.536054e+02  7.994604e-02      1642   1
-        15   5.309983e+02  5.536054e+02  8.481002e-02      1745   1
-        16   4.580340e+02  5.536054e+02  8.964396e-02      1848   1
-        17   6.083526e+02  5.536054e+02  9.456801e-02      1951   1
-        18   5.932707e+02  5.536054e+02  9.952307e-02      2054   1
-        19   6.083526e+02  5.536054e+02  1.043780e-01      2157   1
-        20   4.604990e+02  5.536054e+02  1.092670e-01      2260   1
-        21   4.679086e+02  5.536054e+02  1.303971e-01      2563   1
-        22   5.653721e+02  5.536054e+02  1.353321e-01      2666   1
-        23   4.036846e+02  5.536054e+02  1.402020e-01      2769   1
-        24   5.971798e+02  5.536054e+02  1.451249e-01      2872   1
-        25   6.083526e+02  5.536054e+02  1.500471e-01      2975   1
-        26   6.083526e+02  5.536054e+02  1.550000e-01      3078   1
-        27   6.083526e+02  5.536054e+02  1.599169e-01      3181   1
-        28   5.215448e+02  5.536054e+02  1.648250e-01      3284   1
-        29   6.042291e+02  5.536054e+02  1.697831e-01      3387   1
-        30   4.627215e+02  5.536054e+02  1.747749e-01      3490   1
-        31   6.083526e+02  5.536054e+02  1.797011e-01      3593   1
-        32   6.083526e+02  5.536054e+02  1.846240e-01      3696   1
-        33   6.083526e+02  5.536054e+02  1.895890e-01      3799   1
-        34   5.722953e+02  5.536054e+02  1.945641e-01      3902   1
-        35   4.857658e+02  5.536054e+02  1.995211e-01      4005   1
-        36   6.083526e+02  5.536054e+02  2.044399e-01      4108   1
-        37   6.083526e+02  5.536054e+02  2.093799e-01      4211   1
-        38   4.537736e+02  5.536054e+02  2.142861e-01      4314   1
-        39   5.990232e+02  5.536054e+02  2.191470e-01      4417   1
-        40   6.083526e+02  5.536054e+02  2.240000e-01      4520   1
+         1   0.000000e+00  7.381026e+02  6.302834e-03       103   1
+         2   4.636886e+02  5.979152e+02  2.295399e-02       406   1
+         3   5.717418e+02  5.585856e+02  2.751398e-02       509   1
+         4   6.541002e+02  5.566193e+02  3.217483e-02       612   1
+         5   6.301200e+02  5.554489e+02  3.699088e-02       715   1
+         6   4.696176e+02  5.546821e+02  4.157782e-02       818   1
+         7   5.037949e+02  5.546752e+02  4.625177e-02       921   1
+         8   4.220417e+02  5.546684e+02  5.082083e-02      1024   1
+         9   5.869911e+02  5.546684e+02  5.536699e-02      1127   1
+        10   6.110812e+02  5.546684e+02  5.999398e-02      1230   1
+        11   5.069773e+02  5.546684e+02  6.511593e-02      1333   1
+        12   6.110812e+02  5.546684e+02  7.001781e-02      1436   1
+        13   6.110812e+02  5.546684e+02  7.483697e-02      1539   1
+        14   4.273511e+02  5.546684e+02  7.967782e-02      1642   1
+        15   5.555456e+02  5.546684e+02  8.452797e-02      1745   1
+        16   6.110812e+02  5.546684e+02  8.954000e-02      1848   1
+        17   6.037710e+02  5.546684e+02  9.435081e-02      1951   1
+        18   5.005857e+02  5.546684e+02  9.913182e-02      2054   1
+        19   4.872447e+02  5.546684e+02  1.039088e-01      2157   1
+        20   6.110812e+02  5.546684e+02  1.087809e-01      2260   1
+        21   6.110812e+02  5.546684e+02  1.276228e-01      2563   1
+        22   4.828795e+02  5.546684e+02  1.324539e-01      2666   1
+        23   6.110812e+02  5.546684e+02  1.373909e-01      2769   1
+        24   6.110812e+02  5.546684e+02  1.421468e-01      2872   1
+        25   5.271693e+02  5.546684e+02  1.468759e-01      2975   1
+        26   6.110812e+02  5.546684e+02  1.516318e-01      3078   1
+        27   3.831967e+02  5.546684e+02  2.804129e-01      3181   1
+        28   6.110812e+02  5.546684e+02  2.856400e-01      3284   1
+        29   4.179279e+02  5.546684e+02  2.908139e-01      3387   1
+        30   5.388067e+02  5.546684e+02  2.960649e-01      3490   1
+        31   3.831967e+02  5.546684e+02  3.013270e-01      3593   1
+        32   4.334630e+02  5.546684e+02  3.064640e-01      3696   1
+        33   4.958795e+02  5.546684e+02  3.117158e-01      3799   1
+        34   4.945721e+02  5.546684e+02  3.169348e-01      3902   1
+        35   4.501120e+02  5.546684e+02  3.220439e-01      4005   1
+        36   3.831967e+02  5.546684e+02  3.272078e-01      4108   1
+        37   5.869911e+02  5.546684e+02  3.323789e-01      4211   1
+        38   5.446092e+02  5.546684e+02  3.375349e-01      4314   1
+        39   4.893662e+02  5.546684e+02  3.426979e-01      4417   1
+        40   6.110812e+02  5.546684e+02  3.478260e-01      4520   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 2.240000e-01
+total time (s) : 3.478260e-01
 total solves   : 4520
-best bound     :  5.536054e+02
-simulation ci  :  5.440248e+02 ± 3.359064e+01
+best bound     :  5.546684e+02
+simulation ci  :  5.179208e+02 ± 3.581163e+01
 numeric issues : 0
--------------------------------------------------------------------

One way to query the optimal policy is with SDDP.DecisionRule:

first_stage_rule = SDDP.DecisionRule(model; node = 1)
A decision rule for node 1
solution_1 = SDDP.evaluate(first_stage_rule; incoming_state = Dict(:x => 0.0))
(stage_objective = -405.56843196806784, outgoing_state = Dict(:x => 202.78421598403392), controls = Dict{Any, Any}())

Here's the second stage:

second_stage_rule = SDDP.DecisionRule(model; node = 2)
+-------------------------------------------------------------------

One way to query the optimal policy is with SDDP.DecisionRule:

first_stage_rule = SDDP.DecisionRule(model; node = 1)
A decision rule for node 1
solution_1 = SDDP.evaluate(first_stage_rule; incoming_state = Dict(:x => 0.0))
(stage_objective = -407.3874853624126, outgoing_state = Dict(:x => 203.6937426812063), controls = Dict{Any, Any}())

Here's the second stage:

second_stage_rule = SDDP.DecisionRule(model; node = 2)
 solution = SDDP.evaluate(
     second_stage_rule;
     incoming_state = Dict(:x => solution_1.outgoing_state[:x]),
     noise = 170.0,  # A value of d[ω], can be out-of-sample.
     controls_to_record = [:u_sell],
-)
(stage_objective = 846.7215784015966, outgoing_state = Dict(:x => 32.78421598403392), controls = Dict(:u_sell => 170.0))

Simulation

Querying the decision rules is tedious. It's often more useful to simulate the policy:

simulations = SDDP.simulate(
+)
(stage_objective = 846.6306257318794, outgoing_state = Dict(:x => 33.69374268120629), controls = Dict(:u_sell => 170.0))

Simulation

Querying the decision rules is tedious. It's often more useful to simulate the policy:

simulations = SDDP.simulate(
     model,
     10,  #= number of replications =#
     [:x, :u_sell, :u_make];  #= variables to record =#
     skip_undefined_variables = true,
 );

simulations is a vector with 10 elements

length(simulations)
10

and each element is a vector with two elements (one for each stage)

length(simulations[1])
2

The first stage contains:

simulations[1][1]
Dict{Symbol, Any} with 9 entries:
-  :u_make          => 202.784
-  :bellman_term    => 959.174
+  :u_make          => 203.694
+  :bellman_term    => 962.056
   :noise_term      => nothing
   :node_index      => 1
-  :stage_objective => -405.568
+  :stage_objective => -407.387
   :objective_state => nothing
   :u_sell          => NaN
   :belief          => Dict(1=>1.0)
-  :x               => State{Float64}(0.0, 202.784)

The second stage contains:

simulations[1][2]
Dict{Symbol, Any} with 9 entries:
+  :x               => State{Float64}(0.0, 203.694)

The second stage contains:

simulations[1][2]
Dict{Symbol, Any} with 9 entries:
   :u_make          => NaN
   :bellman_term    => 0.0
-  :noise_term      => 169.145
+  :noise_term      => 188.923
   :node_index      => 2
-  :stage_objective => 842.362
+  :stage_objective => 943.137
   :objective_state => nothing
-  :u_sell          => 169.145
+  :u_sell          => 188.923
   :belief          => Dict(2=>1.0)
-  :x               => State{Float64}(202.784, 33.6391)

We can compute aggregated statistics across the simulations:

objectives = map(simulations) do simulation
+  :x               => State{Float64}(203.694, 14.771)

We can compute aggregated statistics across the simulations:

objectives = map(simulations) do simulation
     return sum(data[:stage_objective] for data in simulation)
 end
 μ, t = SDDP.confidence_interval(objectives)
-println("Simulation ci : $μ ± $t")
Simulation ci : 553.947918458339 ± 41.722717369371274

Risk aversion revisited

SDDP.jl contains a number of risk measures. One example is:

0.5 * SDDP.Expectation() + 0.5 * SDDP.WorstCase()
A convex combination of 0.5 * SDDP.Expectation() + 0.5 * SDDP.WorstCase()

You can construct a risk-averse policy by passing a risk measure to the risk_measure keyword argument of SDDP.train.

We can explore how the optimal decision changes with risk by creating a function:

function solve_newsvendor(risk_measure::SDDP.AbstractRiskMeasure)
+println("Simulation ci : $μ ± $t")
Simulation ci : 549.5972318581147 ± 48.768902289249986

Risk aversion revisited

SDDP.jl contains a number of risk measures. One example is:

0.5 * SDDP.Expectation() + 0.5 * SDDP.WorstCase()
A convex combination of 0.5 * SDDP.Expectation() + 0.5 * SDDP.WorstCase()

You can construct a risk-averse policy by passing a risk measure to the risk_measure keyword argument of SDDP.train.
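
For example (a hedged sketch, assuming model is the two-stage newsvendor policy graph built above, and reusing the convex combination shown earlier):

SDDP.train(
    model;
    risk_measure = 0.5 * SDDP.Expectation() + 0.5 * SDDP.WorstCase(),
    iteration_limit = 50,
    print_level = 0,
)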

We can explore how the optimal decision changes with risk by creating a function:

function solve_newsvendor(risk_measure::SDDP.AbstractRiskMeasure)
     model = SDDP.LinearPolicyGraph(;
         stages = 2,
         sense = :Max,
@@ -377,7 +367,7 @@
     first_stage_rule = SDDP.DecisionRule(model; node = 1)
     solution = SDDP.evaluate(first_stage_rule; incoming_state = Dict(:x => 0.0))
     return solution.outgoing_state[:x]
-end
solve_newsvendor (generic function with 1 method)

Now we can see how many units a decision maker would order using CVaR:

solve_newsvendor(SDDP.CVaR(0.4))
178.92655407252334

as well as a decision-maker who cares only about the worst-case outcome:

solve_newsvendor(SDDP.WorstCase())
158.61371541755105

In general, the decision-maker will be somewhere between the two extremes. The SDDP.Entropic risk measure has a single parameter that lets us explore the space of policies between the two extremes. When the parameter is small, the measure acts like SDDP.Expectation, and when it is large, it acts like SDDP.WorstCase.

Here is what we get if we solve our problem multiple times for different values of the risk aversion parameter $\gamma$:

Γ = [10^i for i in -4:0.5:1]
+end
solve_newsvendor (generic function with 1 method)

Now we can see how many units a decision maker would order using CVaR:

solve_newsvendor(SDDP.CVaR(0.4))
181.5219855401493

as well as a decision-maker who cares only about the worst-case outcome:

solve_newsvendor(SDDP.WorstCase())
159.01049610940416

In general, the decision-maker will be somewhere between the two extremes. The SDDP.Entropic risk measure has a single parameter that lets us explore the space of policies between the two extremes. When the parameter is small, the measure acts like SDDP.Expectation, and when it is large, it acts like SDDP.WorstCase.

Here is what we get if we solve our problem multiple times for different values of the risk aversion parameter $\gamma$:

Γ = [10^i for i in -4:0.5:1]
 buy = [solve_newsvendor(SDDP.Entropic(γ)) for γ in Γ]
 Plots.plot(
     Γ,
@@ -386,4 +376,4 @@
     xlabel = "Risk aversion parameter γ",
     ylabel = "Number of pies to make",
     legend = false,
-)
Example block output

Things to try

There are a number of things you can try next:

  • Experiment with different buy and sales prices
  • Experiment with different distributions of demand
  • Explore how the optimal policy changes if you use a different risk measure
  • What happens if you can only buy and sell integer numbers of newspapers? Try this by adding Int to the variable definitions: @variable(subproblem, buy >= 0, Int)
  • What happens if you use a different upper bound? Try an invalid one like -100, and a very large one like 1e12.
+)
Example block output

Things to try

There are a number of things you can try next:

diff --git a/previews/PR797/tutorial/example_reservoir/0c9b580a.svg b/previews/PR797/tutorial/example_reservoir/0c9b580a.svg deleted file mode 100644 index be2384a2e..000000000 --- a/previews/PR797/tutorial/example_reservoir/0c9b580a.svg +++ /dev/null @@ -1,86 +0,0 @@ (SVG plot markup omitted)
diff --git a/previews/PR797/tutorial/example_reservoir/f6caca7e.svg b/previews/PR797/tutorial/example_reservoir/1808f44e.svg similarity index 84% rename from previews/PR797/tutorial/example_reservoir/f6caca7e.svg rename to previews/PR797/tutorial/example_reservoir/1808f44e.svg index b8e9739fb..d6cbf7727 100644 --- a/previews/PR797/tutorial/example_reservoir/f6caca7e.svg +++ b/previews/PR797/tutorial/example_reservoir/1808f44e.svg @@ -1,52 +1,52 @@ (SVG plot markup omitted)
diff --git a/previews/PR797/tutorial/example_reservoir/4cb1679b.svg b/previews/PR797/tutorial/example_reservoir/3623a425.svg similarity index 85% rename from previews/PR797/tutorial/example_reservoir/4cb1679b.svg rename to previews/PR797/tutorial/example_reservoir/3623a425.svg index d8a619456..6bc1b1c67 100644 --- a/previews/PR797/tutorial/example_reservoir/4cb1679b.svg +++ b/previews/PR797/tutorial/example_reservoir/3623a425.svg @@ -1,46 +1,46 @@ (SVG plot markup omitted)
diff --git a/previews/PR797/tutorial/example_reservoir/03e69e0e.svg b/previews/PR797/tutorial/example_reservoir/3bbadfc7.svg similarity index 71% rename from previews/PR797/tutorial/example_reservoir/03e69e0e.svg rename to previews/PR797/tutorial/example_reservoir/3bbadfc7.svg index 3e4aa1bad..bad791c68 100644 --- a/previews/PR797/tutorial/example_reservoir/03e69e0e.svg +++ b/previews/PR797/tutorial/example_reservoir/3bbadfc7.svg @@ -1,148 +1,148 @@ (SVG plot markup omitted)
diff --git a/previews/PR797/tutorial/example_reservoir/3915e49d.svg b/previews/PR797/tutorial/example_reservoir/60d3ffea.svg similarity index 85% rename from previews/PR797/tutorial/example_reservoir/3915e49d.svg rename to previews/PR797/tutorial/example_reservoir/60d3ffea.svg index aa350a6c9..4abc07ffd 100644 --- a/previews/PR797/tutorial/example_reservoir/3915e49d.svg +++ b/previews/PR797/tutorial/example_reservoir/60d3ffea.svg @@ -1,46 +1,46 @@ (SVG plot markup omitted)
diff --git a/previews/PR797/tutorial/example_reservoir/9ced3c61.svg b/previews/PR797/tutorial/example_reservoir/9ced3c61.svg new file mode 100644 index 000000000..07cc6888d --- /dev/null +++ b/previews/PR797/tutorial/example_reservoir/9ced3c61.svg @@ -0,0 +1,86 @@ (SVG plot markup omitted)
diff --git a/previews/PR797/tutorial/example_reservoir/64d71310.svg b/previews/PR797/tutorial/example_reservoir/bca9ef48.svg similarity index 84% rename from previews/PR797/tutorial/example_reservoir/64d71310.svg rename to previews/PR797/tutorial/example_reservoir/bca9ef48.svg index 23b45b646..d52eca5aa 100644 --- a/previews/PR797/tutorial/example_reservoir/64d71310.svg +++ b/previews/PR797/tutorial/example_reservoir/bca9ef48.svg @@ -1,109 +1,109 @@ (SVG plot markup omitted)
diff --git a/previews/PR797/tutorial/example_reservoir/5c2376a4.svg b/previews/PR797/tutorial/example_reservoir/d69263ec.svg similarity index 85% rename from previews/PR797/tutorial/example_reservoir/5c2376a4.svg rename to previews/PR797/tutorial/example_reservoir/d69263ec.svg index c847fa637..ab0427f40 100644 --- a/previews/PR797/tutorial/example_reservoir/5c2376a4.svg +++ b/previews/PR797/tutorial/example_reservoir/d69263ec.svg @@ -1,52 +1,52 @@ (SVG plot markup omitted)
diff --git a/previews/PR797/tutorial/example_reservoir/index.html b/previews/PR797/tutorial/example_reservoir/index.html index 7a3040e36..f50478834 100644 --- a/previews/PR797/tutorial/example_reservoir/index.html +++ b/previews/PR797/tutorial/example_reservoir/index.html @@ -3,13 +3,13 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Example: deterministic to stochastic

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

The purpose of this tutorial is to explain how we can go from a deterministic time-staged optimal control model in JuMP to a multistage stochastic optimization model in SDDP.jl. As a motivating problem, we consider the hydro-thermal problem with a single reservoir.

Packages

This tutorial requires the following packages:

using JuMP
+

Example: deterministic to stochastic

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

The purpose of this tutorial is to explain how we can go from a deterministic time-staged optimal control model in JuMP to a multistage stochastic optimization model in SDDP.jl. As a motivating problem, we consider the hydro-thermal problem with a single reservoir.

Packages

This tutorial requires the following packages:

using JuMP
 using SDDP
 import CSV
 import DataFrames
 import HiGHS
 import Plots

Data

First, we need some data for the problem. For this tutorial, we'll write CSV files to a temporary directory from Julia. If you have an existing file, you could change the filename to point to that instead.

dir = mktempdir()
-filename = joinpath(dir, "example_reservoir.csv")
"/tmp/jl_uc6GtK/example_reservoir.csv"

Here is the data

csv_data = """
+filename = joinpath(dir, "example_reservoir.csv")
"/tmp/jl_2rREYD/example_reservoir.csv"

Here is the data

csv_data = """
 week,inflow,demand,cost
 1,3,7,10.2\n2,2,7.1,10.4\n3,3,7.2,10.6\n4,2,7.3,10.9\n5,3,7.4,11.2\n
 6,2,7.6,11.5\n7,3,7.8,11.9\n8,2,8.1,12.3\n9,3,8.3,12.7\n10,2,8.6,13.1\n
@@ -29,7 +29,7 @@
     Plots.plot(data[!, :cost]; ylabel = "Cost", xlabel = "Week");
     layout = (3, 1),
     legend = false,
-)
Example block output

The number of weeks will be useful later:

T = size(data, 1)
52

Deterministic JuMP model

To start, we construct a deterministic model in pure JuMP.

Create a JuMP model, using HiGHS as the optimizer:

model = Model(HiGHS.Optimizer)
+)
Example block output

The number of weeks will be useful later:

T = size(data, 1)
52

Deterministic JuMP model

To start, we construct a deterministic model in pure JuMP.

Create a JuMP model, using HiGHS as the optimizer:

model = Model(HiGHS.Optimizer)
 set_silent(model)

x_storage[t]: the amount of water in the reservoir at the start of stage t:

reservoir_max = 320.0
 @variable(model, 0 <= x_storage[1:T+1] <= reservoir_max)
53-element Vector{VariableRef}:
  x_storage[1]
@@ -197,13 +197,13 @@
   Dual objective value : 6.82910e+02
 
 * Work counters
-  Solve time (sec)   : 8.59976e-04
+  Solve time (sec)   : 8.78811e-04
   Simplex iterations : 53
   Barrier iterations : 0
   Node count         : -1
 

The total cost is:

objective_value(model)
682.9099999999999

Here's a plot of demand and generation:

Plots.plot(data[!, :demand]; label = "Demand", xlabel = "Week")
 Plots.plot!(value.(u_thermal); label = "Thermal")
-Plots.plot!(value.(u_flow); label = "Hydro")
Example block output

And here's the storage over time:

Plots.plot(value.(x_storage); label = "Storage", xlabel = "Week")
Example block output

Deterministic SDDP model

For the next step, we show how to decompose our JuMP model into SDDP.jl. It should obtain the same solution.

model = SDDP.LinearPolicyGraph(;
+Plots.plot!(value.(u_flow); label = "Hydro")
Example block output

And here's the storage over time:

Plots.plot(value.(x_storage); label = "Storage", xlabel = "Week")
Example block output

Deterministic SDDP model

For the next step, we show how to decompose our JuMP model into SDDP.jl. It should obtain the same solution.

model = SDDP.LinearPolicyGraph(;
     stages = T,
     sense = :Min,
     lower_bound = 0.0,
@@ -252,11 +252,11 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   1.079600e+03  3.157700e+02  4.319191e-02       104   1
-        10   6.829100e+02  6.829100e+02  1.408200e-01      1040   1
+         1   1.079600e+03  3.157700e+02  4.396701e-02       104   1
+        10   6.829100e+02  6.829100e+02  1.422491e-01      1040   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 1.408200e-01
+total time (s) : 1.422491e-01
 total solves   : 1040
 best bound     :  6.829100e+02
 simulation ci  :  7.289889e+02 ± 7.726064e+01
@@ -279,9 +279,9 @@
 
 Plots.plot(data[!, :demand]; label = "Demand", xlabel = "Week")
 Plots.plot!(r_sim; label = "Thermal")
-Plots.plot!(u_sim; label = "Hydro")
Example block output

Perfect. That's the same as we got before.

Now let's look at x_storage. This is a little more complicated, because we need to grab the outgoing value of the state variable in each stage:

x_sim = [sim[:x_storage].out for sim in simulations[1]]
+Plots.plot!(u_sim; label = "Hydro")
Example block output

Perfect. That's the same as we got before.

Now let's look at x_storage. This is a little more complicated, because we need to grab the outgoing value of the state variable in each stage:

x_sim = [sim[:x_storage].out for sim in simulations[1]]
 
-Plots.plot(x_sim; label = "Storage", xlabel = "Week")
Example block output

Stochastic SDDP model

Now we add some randomness to our model. In each stage, we assume that the inflow could be: 2 units lower, with 30% probability; the same as before, with 40% probability; or 5 units higher, with 30% probability.

model = SDDP.LinearPolicyGraph(;
+Plots.plot(x_sim; label = "Storage", xlabel = "Week")
Example block output

Stochastic SDDP model

Now we add some randomness to our model. In each stage, we assume that the inflow could be: 2 units lower, with 30% probability; the same as before, with 40% probability; or 5 units higher, with 30% probability.
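Before looking at the full model, here is a minimal sketch of how such a three-point distribution is typically attached to a node with SDDP.parameterize. It assumes a node subproblem sp, a stage index t, and the inflow-fixing variable ω_inflow that is recorded in the simulations later; it is illustrative rather than the exact code of the model that follows.

Ω = [-2.0, 0.0, 5.0]   # change in inflow relative to the historical data
P = [0.3, 0.4, 0.3]    # matching probabilities
@variable(sp, ω_inflow)
SDDP.parameterize(sp, Ω, P) do ω
    # Fix the sampled inflow: historical inflow for week t plus the shift ω.
    JuMP.fix(ω_inflow, data[t, :inflow] + ω)
    return
end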

model = SDDP.LinearPolicyGraph(;
     stages = T,
     sense = :Min,
     lower_bound = 0.0,
@@ -335,23 +335,23 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   0.000000e+00  0.000000e+00  4.332209e-02       208   1
-        43   3.147876e+02  2.482960e+02  1.051473e+00      8944   1
-        81   2.500700e+02  2.633541e+02  2.073444e+00     16848   1
-       100   7.140000e+01  2.678968e+02  2.622479e+00     20800   1
+         1   0.000000e+00  0.000000e+00  4.392004e-02       208   1
+        47   3.068656e+02  2.506129e+02  1.065994e+00      9776   1
+        82   2.220767e+02  2.649358e+02  2.067326e+00     17056   1
+       100   4.106203e+02  2.693178e+02  2.582217e+00     20800   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 2.622479e+00
+total time (s) : 2.582217e+00
 total solves   : 20800
-best bound     :  2.678968e+02
-simulation ci  :  2.990844e+02 ± 4.412856e+01
+best bound     :  2.693178e+02
+simulation ci  :  2.763455e+02 ± 3.951780e+01
 numeric issues : 0
 -------------------------------------------------------------------

Now simulate the policy. This time we do 100 replications because the policy is now stochastic instead of deterministic:

simulations =
     SDDP.simulate(model, 100, [:x_storage, :u_flow, :u_thermal, :ω_inflow]);

And let's plot the use of thermal generation in each replication:

plot = Plots.plot(data[!, :demand]; label = "Demand", xlabel = "Week")
 for simulation in simulations
     Plots.plot!(plot, [sim[:u_thermal] for sim in simulation]; label = "")
 end
-plot
Example block output

Viewing and interpreting static plots like this is difficult, particularly as the number of simulations grows. SDDP.jl includes an interactive SpaghettiPlot that makes things easier:

plot = SDDP.SpaghettiPlot(simulations)
+plot
Example block output

Viewing and interpreting static plots like this is difficult, particularly as the number of simulations grows. SDDP.jl includes an interactive SpaghettiPlot that makes things easier:

plot = SDDP.SpaghettiPlot(simulations)
 SDDP.add_spaghetti(plot; title = "Storage") do sim
     return sim[:x_storage].out
 end
@@ -427,41 +427,42 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   2.077700e+04  1.672058e+04  1.148610e-01      1043   1
-         4   2.371448e+05  8.790687e+04  1.158474e+00     12492   1
-         9   3.554457e+05  9.269712e+04  3.090096e+00     32267   1
-        14   2.087393e+05  9.317022e+04  5.664502e+00     48506   1
-        15   4.645476e+05  9.323300e+04  8.595140e+00     68477   1
-        23   9.164029e+04  9.336212e+04  1.415758e+01    101781   1
-        31   8.922762e+04  9.336533e+04  1.948426e+01    129469   1
-        34   1.646510e+05  9.337466e+04  2.591307e+01    158806   1
-        42   7.816923e+04  9.337715e+04  3.137683e+01    181710   1
-        45   1.521806e+05  9.337999e+04  3.712482e+01    203767   1
-        48   3.739193e+05  9.338241e+04  4.592514e+01    234560   1
-        52   8.064625e+04  9.338499e+04  5.196300e+01    254540   1
-        56   2.230293e+05  9.338634e+04  5.722569e+01    270776   1
-        60   3.795747e+05  9.338706e+04  6.516237e+01    293668   1
-        63   3.962255e+05  9.338735e+04  7.354839e+01    317389   1
-        67   2.279541e+05  9.338929e+04  7.977527e+01    334665   1
-        75   8.334185e+04  9.339061e+04  8.558623e+01    349873   1
-        80   7.289020e+04  9.339111e+04  9.108056e+01    363616   1
-        86   1.592304e+05  9.339144e+04  9.888962e+01    382146   1
-        93   9.195856e+04  9.339201e+04  1.049164e+02    396103   1
-        95   1.690896e+05  9.339230e+04  1.100594e+02    407757   1
-        96   3.122782e+05  9.339230e+04  1.160097e+02    421072   1
-       100   4.335252e+03  9.339255e+04  1.225263e+02    435020   1
+         1   8.896140e+04  5.355877e+04  3.290091e-01      3747   1
+         4   5.114702e+04  8.712480e+04  1.375316e+00     14988   1
+         8   1.690164e+05  9.103990e+04  2.807647e+00     28728   1
+        12   1.891036e+05  9.258501e+04  4.405667e+00     42468   1
+        14   1.793613e+05  9.297886e+04  5.861146e+00     53290   1
+        15   2.932641e+05  9.328229e+04  7.627236e+00     65981   1
+        17   2.262467e+05  9.333438e+04  9.302794e+00     76595   1
+        24   8.157553e+04  9.335834e+04  1.438460e+01    105736   1
+        32   4.245414e+04  9.336878e+04  1.961926e+01    131968   1
+        41   3.636223e+05  9.337191e+04  2.493370e+01    155499   1
+        47   6.304775e+04  9.337416e+04  3.043700e+01    177565   1
+        49   3.958071e+05  9.337891e+04  3.554630e+01    196291   1
+        53   2.541544e+05  9.337995e+04  4.204532e+01    219183   1
+        60   1.272915e+05  9.338230e+04  4.841793e+01    240420   1
+        65   3.349013e+05  9.338373e+04  5.545933e+01    262067   1
+        73   1.032922e+05  9.338608e+04  6.069484e+01    277483   1
+        75   4.508955e+05  9.338630e+04  6.954851e+01    302033   1
+        78   1.306855e+05  9.338652e+04  7.492027e+01    316394   1
+        83   1.617044e+05  9.338707e+04  8.168241e+01    333881   1
+        89   3.304372e+05  9.338844e+04  9.095024e+01    356571   1
+        92   2.115612e+05  9.338923e+04  9.950628e+01    376548   1
+        96   1.860008e+05  9.339074e+04  1.081299e+02    394448   1
+        99   3.774533e+04  9.339173e+04  1.133516e+02    405897   1
+       100   2.990300e+04  9.339187e+04  1.140218e+02    407356   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 1.225263e+02
-total solves   : 435020
-best bound     :  9.339255e+04
-simulation ci  :  9.780506e+04 ± 2.038111e+04
+total time (s) : 1.140218e+02
+total solves   : 407356
+best bound     :  9.339187e+04
+simulation ci  :  9.160261e+04 ± 1.843432e+04
 numeric issues : 0
 -------------------------------------------------------------------

When we simulate now, each trajectory will be a different length, because each cycle has a 95% probability of continuing and a 5% probability of stopping.

simulations = SDDP.simulate(model, 3);
 length.(simulations)
3-element Vector{Int64}:
-  208
- 1508
-  780

We can simulate a fixed number of cycles by passing a sampling_scheme:

simulations = SDDP.simulate(
+  988
+ 2184
+  728

We can simulate a fixed number of cycles by passing a sampling_scheme:

simulations = SDDP.simulate(
     model,
     100,
     [:x_storage, :u_flow];
@@ -498,4 +499,4 @@
         return sim[:u_flow]
     end;
     layout = (2, 1),
-)
Example block output

Next steps

Our model is very basic. There are many aspects that we could improve:

  • Can you add a second reservoir to make a river chain?

  • Can you modify the problem and data to use proper units, including a conversion between the volume of water flowing through the turbine and the electrical power output?

+)
Example block output

Next steps

Our model is very basic. There are many aspects that we could improve:

  • Can you add a second reservoir to make a river chain?

  • Can you modify the problem and data to use proper units, including a conversion between the volume of water flowing through the turbine and the electrical power output?

diff --git a/previews/PR797/tutorial/first_steps/index.html b/previews/PR797/tutorial/first_steps/index.html
index 8ad6e1ed6..f8e4ecd15 100644
--- a/previews/PR797/tutorial/first_steps/index.html
+++ b/previews/PR797/tutorial/first_steps/index.html
@@ -3,7 +3,7 @@

An introduction to SDDP.jl

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

SDDP.jl is a solver for multistage stochastic optimization problems. By multistage, we mean problems in which an agent makes a sequence of decisions over time. By stochastic, we mean that the agent is making decisions in the presence of uncertainty that is gradually revealed over the multiple stages.

Tip

Multistage stochastic programming has a lot in common with fields like stochastic optimal control, approximate dynamic programming, Markov decision processes, and reinforcement learning. If it helps, you can think of SDDP as Q-learning in which we approximate the value function using linear programming duality.

This tutorial is in two parts. First, it is an introduction to the background notation and theory we need, and second, it solves a simple multistage stochastic programming problem.

What is a node?

A common feature of multistage stochastic optimization problems is that they model an agent controlling a system over time. To simplify things initially, we're going to start by describing what happens at an instant in time at which the agent makes a decision. Only after this will we extend our problem to multiple stages and the notion of time.

A node is a place at which the agent makes a decision.

Tip

For readers with a stochastic programming background, "node" is synonymous with "stage" in this section. However, for reasons that will become clear shortly, there can be more than one "node" per instant in time, which is why we prefer the term "node" over "stage."

States, controls, and random variables

The system that we are modeling can be described by three types of variables.

  1. State variables track a property of the system over time.

    Each node has an associated incoming state variable (the value of the state at the start of the node), and an outgoing state variable (the value of the state at the end of the node).

    Examples of state variables include the volume of water in a reservoir, the number of units of inventory in a warehouse, or the spatial position of a moving vehicle.

    Because state variables track the system over time, each node must have the same set of state variables.

    We denote state variables by the letter $x$ for the incoming state variable and $x^\prime$ for the outgoing state variable.

  2. Control variables are actions taken (implicitly or explicitly) by the agent within a node which modify the state variables.

    Examples of control variables include releases of water from the reservoir, sales or purchasing decisions, and acceleration or braking of the vehicle.

    Control variables are local to a node $i$, and they can differ between nodes. For example, some control variables may only be available in certain nodes.

    We denote control variables by the letter $u$.

  3. Random variables are finite, discrete, exogenous random variables that the agent observes at the start of a node, before the control variables are decided.

    Examples of random variables include rainfall inflow into a reservoir, probabilistic perishing of inventory, and steering errors in a vehicle.

    Random variables are local to a node $i$, and they can differ between nodes. For example, some nodes may have random variables, and some nodes may not.

    We denote random variables by the Greek letter $\omega$ and the sample space from which they are drawn by $\Omega_i$. The probability of sampling $\omega$ is denoted $p_{\omega}$ for simplicity.

    Importantly, the random variable associated with node $i$ is independent of the random variables in all other nodes.

Dynamics

In a node $i$, the three variables are related by a transition function, which maps the incoming state, the controls, and the random variables to the outgoing state as follows: $x^\prime = T_i(x, u, \omega)$.

As a result of entering a node $i$ with the incoming state $x$, observing random variable $\omega$, and choosing control $u$, the agent incurs a cost $C_i(x, u, \omega)$. (If the agent is a maximizer, this can be a profit, or a negative cost.) We call $C_i$ the stage objective.

To choose their control variables in node $i$, the agent uses a decision rule $u = \pi_i(x, \omega)$, which is a function that maps the incoming state variable and observation of the random variable to a control $u$. This control must satisfy some feasibility requirements $u \in U_i(x, \omega)$.

Here is a schematic which we can use to visualize a single node:

Hazard-decision node

Policy graphs

Now that we have a node, we need to connect multiple nodes together to form a multistage stochastic program. We call the graph created by connecting nodes together a policy graph.

The simplest type of policy graph is a linear policy graph. Here's a linear policy graph with three nodes:

Linear policy graph

Here we have dropped the notations inside each node and replaced them by a label (1, 2, and 3) to represent nodes i=1, i=2, and i=3.

In addition to nodes 1, 2, and 3, there is also a root node (the circle), and three arcs. Each arc has an origin node and a destination node, like 1 => 2, and a corresponding probability of transitioning from the origin to the destination. Unless specified, we assume that the arc probabilities are uniform over the number of outgoing arcs. Thus, in this picture the arc probabilities are all 1.0.

State variables flow along the arcs of the graph. Thus, the outgoing state variable $x^\prime$ from node 1 becomes the incoming state variable $x$ to node 2, and so on.

We denote the set of nodes by $\mathcal{N}$, the root node by $R$, and the probability of transitioning from node $i$ to node $j$ by $p_{ij}$. (If no arc exists, then $p_{ij} = 0$.) We define the set of successors of node $i$ as $i^+ = \{j \in \mathcal{N} | p_{ij} > 0\}$.

Each node in the graph corresponds to a place at which the agent makes a decision, and we call moments in time at which the agent makes a decision stages. By convention, we try to draw policy graphs from left-to-right, with the stages as columns. There can be more than one node in a stage! Here's an example of a structure we call a Markovian policy graph:

Markovian policy graph

Here each column represents a moment in time, the squiggly lines represent stochastic rainfall, and the rows represent the world in two discrete states: El Niño and La Niña. In the El Niño states, the distribution of the rainfall random variable is different to the distribution of the rainfall random variable in the La Niña states, and there is some switching probability between the two states that can be modelled by a Markov chain.

Moreover, policy graphs can have cycles! This allows them to model infinite horizon problems. Here's another example, taken from the paper Dowson (2020):

POWDer policy graph

The columns represent time, and the rows represent different states of the world. In this case, the rows represent different prices that milk can be sold for at the end of each year. The squiggly lines denote a multivariate random variable that models the weekly amount of rainfall that occurs.

Note

The sum of probabilities on the outgoing arcs of node $i$ can be less than 1, i.e., $\sum\limits_{j\in i^+} p_{ij} \le 1$. What does this mean? One interpretation is that the probability is a discount factor. Another interpretation is that there is an implicit "zero" node that we have not modeled, with $p_{i0} = 1 - \sum\limits_{j\in i^+} p_{ij}$. This zero node has $C_0(x, u, \omega) = 0$, and $0^+ = \varnothing$.

More notation

Recall that each node $i$ has a decision rule $u = \pi_i(x, \omega)$, which is a function that maps the incoming state variable and observation of the random variable to a control $u$.

The set of decision rules, with one element for each node in the policy graph, is called a policy.

The goal of the agent is to find a policy that minimizes the expected cost of starting at the root node with some initial condition $x_R$, and proceeding from node to node along the probabilistic arcs until they reach a node with no outgoing arcs (or it reaches an implicit "zero" node).

\[\min_{\pi} \mathbb{E}_{i \in R^+, \omega \in \Omega_i}[V_i^\pi(x_R, \omega)],\]

where

\[V_i^\pi(x, \omega) = C_i(x, u, \omega) + \mathbb{E}_{j \in i^+, \varphi \in \Omega_j}[V_j(x^\prime, \varphi)],\]

where $u = \pi_i(x, \omega) \in U_i(x, \omega)$, and $x^\prime = T_i(x, u, \omega)$.

The expectations are a bit complicated, but they are equivalent to:

\[\mathbb{E}_{j \in i^+, \varphi \in \Omega_j}[V_j(x^\prime, \varphi)] = \sum\limits_{j \in i^+} p_{ij} \sum\limits_{\varphi \in \Omega_j} p_{\varphi}V_j(x^\prime, \varphi).\]

An optimal policy is the set of decision rules that the agent can use to make decisions and achieve the smallest expected cost.

Assumptions

Warning

This section is important!

The space of problems you can model with this framework is very large. Too large, in fact, for us to form tractable solution algorithms for! Stochastic dual dynamic programming requires the following assumptions in order to work:

Assumption 1: finite nodes

There is a finite number of nodes in $\mathcal{N}$.

Assumption 2: finite random variables

The sample space $\Omega_i$ is finite and discrete for each node $i\in\mathcal{N}$.

Assumption 3: convex problems

Given fixed $\omega$, $C_i(x, u, \omega)$ is a convex function, $T_i(x, u, \omega)$ is linear, and $U_i(x, u, \omega)$ is a non-empty, bounded convex set with respect to $x$ and $u$.

Assumption 4: no infinite loops

For all loops in the policy graph, the product of the arc transition probabilities around the loop is strictly less than 1.

Assumption 5: relatively complete recourse

This is a technical but important assumption. See Relatively complete recourse for more details.

Note

SDDP.jl relaxes assumption (3) to allow for integer state and control variables, but we won't go into the details here. Assumption (4) essentially means that we obtain a discounted-cost solution for infinite-horizon problems, instead of an average-cost solution; see Dowson (2020) for details.

Dynamic programming and subproblems

Now that we have formulated our problem, we need some ways of computing optimal decision rules. One way is to just use a heuristic like "choose a control randomly from the set of feasible controls." However, such a policy is unlikely to be optimal.

A better way of obtaining an optimal policy is to use Bellman's principle of optimality, a.k.a. Dynamic Programming, and define a recursive subproblem as follows:

\[\begin{aligned} +

An introduction to SDDP.jl

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

SDDP.jl is a solver for multistage stochastic optimization problems. By multistage, we mean problems in which an agent makes a sequence of decisions over time. By stochastic, we mean that the agent is making decisions in the presence of uncertainty that is gradually revealed over the multiple stages.

Tip

Multistage stochastic programming has a lot in common with fields like stochastic optimal control, approximate dynamic programming, Markov decision processes, and reinforcement learning. If it helps, you can think of SDDP as Q-learning in which we approximate the value function using linear programming duality.

This tutorial is in two parts. First, it is an introduction to the background notation and theory we need, and second, it solves a simple multistage stochastic programming problem.

What is a node?

A common feature of multistage stochastic optimization problems is that they model an agent controlling a system over time. To simplify things initially, we're going to start by describing what happens at an instant in time at which the agent makes a decision. Only after this will we extend our problem to multiple stages and the notion of time.

A node is a place at which the agent makes a decision.

Tip

For readers with a stochastic programming background, "node" is synonymous with "stage" in this section. However, for reasons that will become clear shortly, there can be more than one "node" per instant in time, which is why we prefer the term "node" over "stage."

States, controls, and random variables

The system that we are modeling can be described by three types of variables.

  1. State variables track a property of the system over time.

    Each node has an associated incoming state variable (the value of the state at the start of the node), and an outgoing state variable (the value of the state at the end of the node).

    Examples of state variables include the volume of water in a reservoir, the number of units of inventory in a warehouse, or the spatial position of a moving vehicle.

    Because state variables track the system over time, each node must have the same set of state variables.

    We denote state variables by the letter $x$ for the incoming state variable and $x^\prime$ for the outgoing state variable.

  2. Control variables are actions taken (implicitly or explicitly) by the agent within a node which modify the state variables.

    Examples of control variables include releases of water from the reservoir, sales or purchasing decisions, and acceleration or braking of the vehicle.

    Control variables are local to a node $i$, and they can differ between nodes. For example, some control variables may only be available in certain nodes.

    We denote control variables by the letter $u$.

  3. Random variables are finite, discrete, exogenous random variables that the agent observes at the start of a node, before the control variables are decided.

    Examples of random variables include rainfall inflow into a reservoir, probabilistic perishing of inventory, and steering errors in a vehicle.

    Random variables are local to a node $i$, and they can differ between nodes. For example, some nodes may have random variables, and some nodes may not.

    We denote random variables by the Greek letter $\omega$ and the sample space from which they are drawn by $\Omega_i$. The probability of sampling $\omega$ is denoted $p_{\omega}$ for simplicity.

    Importantly, the random variable associated with node $i$ is independent of the random variables in all other nodes.

Dynamics

In a node $i$, the three variables are related by a transition function, which maps the incoming state, the controls, and the random variables to the outgoing state as follows: $x^\prime = T_i(x, u, \omega)$.

As a result of entering a node $i$ with the incoming state $x$, observing random variable $\omega$, and choosing control $u$, the agent incurs a cost $C_i(x, u, \omega)$. (If the agent is a maximizer, this can be a profit, or a negative cost.) We call $C_i$ the stage objective.

To choose their control variables in node $i$, the agent uses a decision rule $u = \pi_i(x, \omega)$, which is a function that maps the incoming state variable and observation of the random variable to a control $u$. This control must satisfy some feasibility requirements $u \in U_i(x, \omega)$.

Here is a schematic which we can use to visualize a single node:

Hazard-decision node
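To make this notation concrete, here is a small, self-contained sketch of how the pieces of a single node are written in SDDP.jl. It is illustrative only: the names x, u, and w, the bounds, and the numbers are placeholders rather than part of a real model.

using SDDP
import HiGHS
model = SDDP.LinearPolicyGraph(;
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, node
    # State variable: x.in is the incoming value, x.out the outgoing value.
    @variable(sp, x >= 0, SDDP.State, initial_value = 10.0)
    # Control variable: local to the node.
    @variable(sp, 0 <= u <= 5)
    # Random variable: fixed to a sampled realization from Ω = {0, 1, 2}.
    @variable(sp, w)
    SDDP.parameterize(ω -> JuMP.fix(w, ω), sp, [0.0, 1.0, 2.0])
    # Transition function: x' = T(x, u, ω).
    @constraint(sp, x.out == x.in - u + w)
    # Stage objective: C(x, u, ω).
    @stageobjective(sp, 2.0 * u)
    return
end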

Policy graphs

Now that we have a node, we need to connect multiple nodes together to form a multistage stochastic program. We call the graph created by connecting nodes together a policy graph.

The simplest type of policy graph is a linear policy graph. Here's a linear policy graph with three nodes:

Linear policy graph

Here we have dropped the notations inside each node and replaced them by a label (1, 2, and 3) to represent nodes i=1, i=2, and i=3.

In addition to nodes 1, 2, and 3, there is also a root node (the circle), and three arcs. Each arc has an origin node and a destination node, like 1 => 2, and a corresponding probability of transitioning from the origin to the destination. Unless specified, we assume that the arc probabilities are uniform over the number of outgoing arcs. Thus, in this picture the arc probabilities are all 1.0.
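As an aside, this three-node linear structure can also be constructed programmatically, which is sometimes a useful way to check your understanding of a graph (illustrative only):

using SDDP
graph = SDDP.LinearGraph(3)
# Printing graph shows the root node and the arcs 0 => 1, 1 => 2, and 2 => 3,
# each with probability 1.0.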

State variables flow along the arcs of the graph. Thus, the outgoing state variable $x^\prime$ from node 1 becomes the incoming state variable $x$ to node 2, and so on.

We denote the set of nodes by $\mathcal{N}$, the root node by $R$, and the probability of transitioning from node $i$ to node $j$ by $p_{ij}$. (If no arc exists, then $p_{ij} = 0$.) We define the set of successors of node $i$ as $i^+ = \{j \in \mathcal{N} | p_{ij} > 0\}$.

Each node in the graph corresponds to a place at which the agent makes a decision, and we call moments in time at which the agent makes a decision stages. By convention, we try to draw policy graphs from left-to-right, with the stages as columns. There can be more than one node in a stage! Here's an example of a structure we call a Markovian policy graph:

Markovian policy graph

Here each column represents a moment in time, the squiggly lines represent stochastic rainfall, and the rows represent the world in two discrete states: El Niño and La Niña. In the El Niño states, the distribution of the rainfall random variable is different to the distribution of the rainfall random variable in the La Niña states, and there is some switching probability between the two states that can be modelled by a Markov chain.
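A sketch of how such a two-state Markovian structure might be declared with SDDP.MarkovianGraph is shown below; the switching probabilities are illustrative placeholders, not values from this tutorial:

using SDDP
graph = SDDP.MarkovianGraph([
    [0.5 0.5],           # root: start in El Niño or La Niña with equal probability
    [0.8 0.2; 0.3 0.7],  # switching probabilities from stage 1 to stage 2
    [0.8 0.2; 0.3 0.7],  # switching probabilities from stage 2 to stage 3
])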

Moreover, policy graphs can have cycles! This allows them to model infinite horizon problems. Here's another example, taken from the paper Dowson (2020):

POWDer policy graph

The columns represent time, and the rows represent different states of the world. In this case, the rows represent different prices that milk can be sold for at the end of each year. The squiggly lines denote a multivariate random variable that models the weekly amount of rainfall that occurs.

Note

The sum of probabilities on the outgoing arcs of node $i$ can be less than 1, i.e., $\sum\limits_{j\in i^+} p_{ij} \le 1$. What does this mean? One interpretation is that the probability is a discount factor. Another interpretation is that there is an implicit "zero" node that we have not modeled, with $p_{i0} = 1 - \sum\limits_{j\in i^+} p_{ij}$. This zero node has $C_0(x, u, \omega) = 0$, and $0^+ = \varnothing$.
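As an illustration of the first interpretation, here is a sketch of a graph with a single recurring node whose self-loop has probability 0.9, which acts like a discount factor of 0.9; the node name :week is a placeholder:

using SDDP
graph = SDDP.Graph(:root)
SDDP.add_node(graph, :week)
SDDP.add_edge(graph, :root => :week, 1.0)
# The probability on the outgoing arcs of :week sums to 0.9 < 1.
SDDP.add_edge(graph, :week => :week, 0.9)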

More notation

Recall that each node $i$ has a decision rule $u = \pi_i(x, \omega)$, which is a function that maps the incoming state variable and observation of the random variable to a control $u$.

The set of decision rules, with one element for each node in the policy graph, is called a policy.

The goal of the agent is to find a policy that minimizes the expected cost of starting at the root node with some initial condition $x_R$, and proceeding from node to node along the probabilistic arcs until they reach a node with no outgoing arcs (or it reaches an implicit "zero" node).

\[\min_{\pi} \mathbb{E}_{i \in R^+, \omega \in \Omega_i}[V_i^\pi(x_R, \omega)],\]

where

\[V_i^\pi(x, \omega) = C_i(x, u, \omega) + \mathbb{E}_{j \in i^+, \varphi \in \Omega_j}[V_j(x^\prime, \varphi)],\]

where $u = \pi_i(x, \omega) \in U_i(x, \omega)$, and $x^\prime = T_i(x, u, \omega)$.

The expectations are a bit complicated, but they are equivalent to:

\[\mathbb{E}_{j \in i^+, \varphi \in \Omega_j}[V_j(x^\prime, \varphi)] = \sum\limits_{j \in i^+} p_{ij} \sum\limits_{\varphi \in \Omega_j} p_{\varphi}V_j(x^\prime, \varphi).\]

An optimal policy is the set of decision rules that the agent can use to make decisions and achieve the smallest expected cost.

Assumptions

Warning

This section is important!

The space of problems you can model with this framework is very large. Too large, in fact, for us to form tractable solution algorithms for! Stochastic dual dynamic programming requires the following assumptions in order to work:

Assumption 1: finite nodes

There is a finite number of nodes in $\mathcal{N}$.

Assumption 2: finite random variables

The sample space $\Omega_i$ is finite and discrete for each node $i\in\mathcal{N}$.

Assumption 3: convex problems

Given fixed $\omega$, $C_i(x, u, \omega)$ is a convex function, $T_i(x, u, \omega)$ is linear, and $U_i(x, u, \omega)$ is a non-empty, bounded convex set with respect to $x$ and $u$.

Assumption 4: no infinite loops

For all loops in the policy graph, the product of the arc transition probabilities around the loop is strictly less than 1.

Assumption 5: relatively complete recourse

This is a technical but important assumption. See Relatively complete recourse for more details.

Note

SDDP.jl relaxes assumption (3) to allow for integer state and control variables, but we won't go into the details here. Assumption (4) essentially means that we obtain a discounted-cost solution for infinite-horizon problems, instead of an average-cost solution; see Dowson (2020) for details.

Dynamic programming and subproblems

Now that we have formulated our problem, we need some ways of computing optimal decision rules. One way is to just use a heuristic like "choose a control randomly from the set of feasible controls." However, such a policy is unlikely to be optimal.

A better way of obtaining an optimal policy is to use Bellman's principle of optimality, a.k.a. Dynamic Programming, and define a recursive subproblem as follows:

\[\begin{aligned} V_i(x, \omega) = \min\limits_{\bar{x}, x^\prime, u} \;\; & C_i(\bar{x}, u, \omega) + \mathbb{E}_{j \in i^+, \varphi \in \Omega_j}[V_j(x^\prime, \varphi)]\\ & x^\prime = T_i(\bar{x}, u, \omega) \\ & u \in U_i(\bar{x}, \omega) \\ @@ -228,14 +228,14 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 1.250000e+04 2.500000e+03 3.855944e-03 12 1 - 10 7.500000e+03 8.333333e+03 8.556104e-02 120 1 + 1 3.750000e+04 2.500000e+03 3.497124e-03 12 1 + 10 1.250000e+04 8.333333e+03 1.358390e-02 120 1 ------------------------------------------------------------------- status : iteration_limit -total time (s) : 8.556104e-02 +total time (s) : 1.358390e-02 total solves : 120 best bound : 8.333333e+03 -simulation ci : 9.125000e+03 ± 2.478419e+03 +simulation ci : 1.187500e+04 ± 6.125000e+03 numeric issues : 0 -------------------------------------------------------------------

There's a lot going on in this printout! Let's break it down.

The first section, "problem," gives some problem statistics. In this example there are 3 nodes, 1 state variable, and 27 scenarios ($3^3$). We haven't solved this problem before so there are no existing cuts.

The "options" section lists some options we are using to solve the problem. For more information on the numerical stability report, read the Numerical stability report section.

The "subproblem structure" section also needs explaining. This looks at all of the nodes in the policy graph and reports the minimum and maximum number of variables and each constraint type in the corresponding subproblem. In this case each subproblem has 7 variables and various numbers of different constraint types. Note that the exact numbers may not correspond to the formulation as you wrote it, because SDDP.jl adds some extra variables for the cost-to-go function.

Then comes the iteration log, which is the main part of the printout. It has the following columns:

  • iteration: the SDDP iteration
  • simulation: the cost of the single forward pass simulation for that iteration. This value is stochastic and is not guaranteed to improve over time. However, it's useful to check that the units are reasonable, and that it is not deterministic if you intended for the problem to be stochastic, etc.
  • bound: this is a lower bound (upper if maximizing) for the value of the optimal policy. This bound should be monotonically improving (increasing if minimizing, decreasing if maximizing), but in some cases it can temporarily worsen due to cut selection, especially in the early iterations of the algorithm.
  • time (s): the total number of seconds spent solving so far
  • solves: the total number of subproblem solves to date. This can be very large!
  • pid: the ID of the processor used to solve that iteration. This should be 1 unless you are using parallel computation.

In addition, if the first character of a line is †, then SDDP.jl experienced numerical issues during the solve, but successfully recovered.

The printout finishes with some summary statistics:

  • status: why did the solver stop?
  • total time (s), best bound, and total solves are the values from the last iteration of the solve.
  • simulation ci: a confidence interval that estimates the quality of the policy from the Simulation column.
  • numeric issues: the number of iterations that experienced numerical issues.
Warning

The simulation ci result can be misleading if you run a small number of iterations, or if the initial simulations are very bad. On a more technical note, it is an in-sample simulation, which may not reflect the true performance of the policy. See Obtaining bounds for more details.

Obtaining the decision rule

After training a policy, we can create a decision rule using SDDP.DecisionRule:

rule = SDDP.DecisionRule(model; node = 1)
A decision rule for node 1

Then, to evaluate the decision rule, we use SDDP.evaluate:

solution = SDDP.evaluate(
     rule;
@@ -257,28 +257,28 @@
   :volume             => State{Float64}(200.0, 100.0)
   :hydro_spill        => 0.0
   :bellman_term       => 2500.0
-  :noise_term         => 0.0
+  :noise_term         => 50.0
   :node_index         => 2
-  :stage_objective    => 5000.0
+  :stage_objective    => 0.0
   :objective_state    => nothing
-  :thermal_generation => 50.0
-  :hydro_generation   => 100.0
+  :thermal_generation => 0.0
+  :hydro_generation   => 150.0
   :belief             => Dict(2=>1.0)

Ignore many of the entries for now; they will be relevant later.

One element of interest is :volume.

outgoing_volume = map(simulations[1]) do node
     return node[:volume].out
 end
3-element Vector{Float64}:
  200.0
  100.0
-   0.0

Another is :thermal_generation.

thermal_generation = map(simulations[1]) do node
+  -0.0

Another is :thermal_generation.

thermal_generation = map(simulations[1]) do node
     return node[:thermal_generation]
 end
3-element Vector{Float64}:
  150.0
-  50.0
-  50.0

Obtaining bounds

Because the optimal policy is stochastic, one common approach to quantify the quality of the policy is to construct a confidence interval for the expected cost by summing the stage objectives along each simulation.

objectives = map(simulations) do simulation
+   0.0
+   0.0

Obtaining bounds

Because the optimal policy is stochastic, one common approach to quantify the quality of the policy is to construct a confidence interval for the expected cost by summing the stage objectives along each simulation.

objectives = map(simulations) do simulation
     return sum(stage[:stage_objective] for stage in simulation)
 end
 
 μ, ci = SDDP.confidence_interval(objectives)
-println("Confidence interval: ", μ, " ± ", ci)
Confidence interval: 8925.0 ± 978.1545755847455

This confidence interval is an estimate for an upper bound of the policy's quality. We can calculate the lower bound using SDDP.calculate_bound.

println("Lower bound: ", SDDP.calculate_bound(model))
Lower bound: 8333.333333333332
Tip

The upper and lower bounds are reversed if maximizing, i.e., SDDP.calculate_bound returns an upper bound.

Custom recorders

In addition to simulating the primal values of variables, we can also pass custom recorder functions. Each of these functions takes one argument, the JuMP subproblem corresponding to each node. This function gets called after we have solved each node as we traverse the policy graph in the simulation.

For example, the dual of the demand constraint (which we named demand_constraint) corresponds to the price we should charge for electricity, since it represents the cost of each additional unit of demand. To calculate this, we can go:

simulations = SDDP.simulate(
+println("Confidence interval: ", μ, " ± ", ci)
Confidence interval: 8250.0 ± 917.3672254133708

This confidence interval is an estimate for an upper bound of the policy's quality. We can calculate the lower bound using SDDP.calculate_bound.

println("Lower bound: ", SDDP.calculate_bound(model))
Lower bound: 8333.333333333332
Tip

The upper and lower bounds are reversed if maximizing, i.e., SDDP.calculate_bound returns an upper bound.

Custom recorders

In addition to simulating the primal values of variables, we can also pass custom recorder functions. Each of these functions takes one argument, the JuMP subproblem corresponding to each node. This function gets called after we have solved each node as we traverse the policy graph in the simulation.

For example, the dual of the demand constraint (which we named demand_constraint) corresponds to the price we should charge for electricity, since it represents the cost of each additional unit of demand. To calculate this, we can go:

simulations = SDDP.simulate(
     model,
     1;  ## Perform a single simulation
     custom_recorders = Dict{Symbol,Function}(
@@ -291,4 +291,4 @@
 end
3-element Vector{Float64}:
   50.0
  100.0
- 150.0

Extracting the marginal water values

Finally, we can use SDDP.ValueFunction and SDDP.evaluate to obtain and evaluate the value function at different points in the state-space.

Note

By "value function" we mean $\mathbb{E}_{j \in i^+, \varphi \in \Omega_j}[V_j(x^\prime, \varphi)]$, not the function $V_i(x, \omega)$.

First, we construct a value function from the first subproblem:

V = SDDP.ValueFunction(model; node = 1)
A value function for node 1

Then we can evaluate V at a point:

cost, price = SDDP.evaluate(V, Dict("volume" => 10))
(21499.999999999996, Dict(:volume => -99.99999999999999))

This returns the cost-to-go (cost), and the gradient of the cost-to-go function with respect to each state variable. Note that since we are minimizing, the price has a negative sign: each additional unit of water leads to a decrease in the expected long-run cost.

+ -0.0

Extracting the marginal water values

Finally, we can use SDDP.ValueFunction and SDDP.evaluate to obtain and evaluate the value function at different points in the state-space.

Note

By "value function" we mean $\mathbb{E}_{j \in i^+, \varphi \in \Omega_j}[V_j(x^\prime, \varphi)]$, not the function $V_i(x, \omega)$.

First, we construct a value function from the first subproblem:

V = SDDP.ValueFunction(model; node = 1)
A value function for node 1

Then we can evaluate V at a point:

cost, price = SDDP.evaluate(V, Dict("volume" => 10))
(21499.999999999996, Dict(:volume => -99.99999999999999))

This returns the cost-to-go (cost), and the gradient of the cost-to-go function with respect to each state variable. Note that since we are minimizing, the price has a negative sign: each additional unit of water leads to a decrease in the expected long-run cost.

diff --git a/previews/PR797/tutorial/inventory.ipynb b/previews/PR797/tutorial/inventory.ipynb new file mode 100644 index 000000000..1c9b675cf --- /dev/null +++ b/previews/PR797/tutorial/inventory.ipynb @@ -0,0 +1,375 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "source": [ + "# Example: inventory management" + ], + "metadata": {} + }, + { + "cell_type": "markdown", + "source": [ + "The purpose of this tutorial is to demonstrate a well-known inventory\n", + "management problem with a finite- and infinite-horizon policy." + ], + "metadata": {} + }, + { + "cell_type": "markdown", + "source": [ + "## Required packages" + ], + "metadata": {} + }, + { + "cell_type": "markdown", + "source": [ + "This tutorial requires the following packages:" + ], + "metadata": {} + }, + { + "outputs": [], + "cell_type": "code", + "source": [ + "using SDDP\n", + "import Distributions\n", + "import HiGHS\n", + "import Plots\n", + "import Statistics" + ], + "metadata": {}, + "execution_count": null + }, + { + "cell_type": "markdown", + "source": [ + "## Background" + ], + "metadata": {} + }, + { + "cell_type": "markdown", + "source": [ + "Consider a periodic review inventory problem involving a single product. The\n", + "initial inventory is denoted by $x_0 \\geq 0$, and a decision-maker can place\n", + "an order at the start of each stage. The objective is to minimize expected\n", + "costs over the planning horizon. The following parameters define the cost\n", + "structure:\n", + "\n", + " * `c` is the unit cost for purchasing each unit\n", + " * `h` is the holding cost per unit remaining at the end of each stage\n", + " * `p` is the shortage cost per unit of unsatisfied demand at the end of each\n", + " stage\n", + "\n", + "There are no fixed ordering costs, and the demand at each stage is assumed to\n", + "follow an independent and identically distributed random variable with\n", + "cumulative distribution function (CDF) $\\Phi(\\cdot)$. Any unsatisfied demand\n", + "is backlogged and carried forward to the next stage." + ], + "metadata": {} + }, + { + "cell_type": "markdown", + "source": [ + "At each stage, an agent must decide how many items to order. The per-stage\n", + "costs are the sum of the order costs, shortage and holding costs incurred at\n", + "the end of the stage, after demand is realized." + ], + "metadata": {} + }, + { + "cell_type": "markdown", + "source": [ + "Following Chapter 19 of Introduction to Operations Research by Hillier and\n", + "Lieberman (7th edition), we use the following parameters: $c=15, h=1, p=15$." + ], + "metadata": {} + }, + { + "outputs": [], + "cell_type": "code", + "source": [ + "x_0 = 10 # initial inventory\n", + "c = 35 # unit inventory cost\n", + "h = 1 # unit inventory holding cost\n", + "p = 15 # unit order cost" + ], + "metadata": {}, + "execution_count": null + }, + { + "cell_type": "markdown", + "source": [ + "Demand follows a continuous uniform distribution between 0 and 800. We\n", + "construct a sample average approximation with 20 scenarios:" + ], + "metadata": {} + }, + { + "outputs": [], + "cell_type": "code", + "source": [ + "Ω = range(0, 800; length = 20);" + ], + "metadata": {}, + "execution_count": null + }, + { + "cell_type": "markdown", + "source": [ + "This is a well-known inventory problem with a closed-form solution. The\n", + "optimal policy is a simple order-up-to policy: if the inventory level is\n", + "below a certain number of units, the decision-maker orders up to that number\n", + "of units. Otherwise, no order is placed. 
For a detailed analysis, refer\n", + "to Foundations of Stochastic Inventory Theory by Evan Porteus (Stanford\n", + "Business Books, 2002)." + ], + "metadata": {} + }, + { + "cell_type": "markdown", + "source": [ + "## Finite horizon" + ], + "metadata": {} + }, + { + "cell_type": "markdown", + "source": [ + "For a finite horizon of length $T$, the problem is to minimize the total\n", + "expected cost over all stages." + ], + "metadata": {} + }, + { + "cell_type": "markdown", + "source": [ + "In the last stage, the decision-maker can recover the unit cost `c` for each\n", + "leftover item, or buy out any remaining backlog, also at the unit cost `c`." + ], + "metadata": {} + }, + { + "outputs": [], + "cell_type": "code", + "source": [ + "T = 10 # number of stages\n", + "model = SDDP.LinearPolicyGraph(;\n", + " stages = T + 1,\n", + " sense = :Min,\n", + " lower_bound = 0.0,\n", + " optimizer = HiGHS.Optimizer,\n", + ") do sp, t\n", + " @variable(sp, x_inventory >= 0, SDDP.State, initial_value = x_0)\n", + " @variable(sp, x_demand >= 0, SDDP.State, initial_value = 0)\n", + " # u_buy is a Decision-Hazard control variable. We decide u.out for use in\n", + " # the next stage\n", + " @variable(sp, u_buy >= 0, SDDP.State, initial_value = 0)\n", + " @variable(sp, u_sell >= 0)\n", + " @variable(sp, w_demand == 0)\n", + " @constraint(sp, x_inventory.out == x_inventory.in + u_buy.in - u_sell)\n", + " @constraint(sp, x_demand.out == x_demand.in + w_demand - u_sell)\n", + " if t == 1\n", + " fix(u_sell, 0; force = true)\n", + " @stageobjective(sp, c * u_buy.out)\n", + " elseif t == T + 1\n", + " fix(u_buy.out, 0; force = true)\n", + " @stageobjective(sp, -c * x_inventory.out + c * x_demand.out)\n", + " SDDP.parameterize(ω -> JuMP.fix(w_demand, ω), sp, Ω)\n", + " else\n", + " @stageobjective(sp, c * u_buy.out + h * x_inventory.out + p * x_demand.out)\n", + " SDDP.parameterize(ω -> JuMP.fix(w_demand, ω), sp, Ω)\n", + " end\n", + " return\n", + "end" + ], + "metadata": {}, + "execution_count": null + }, + { + "cell_type": "markdown", + "source": [ + "Train and simulate the policy:" + ], + "metadata": {} + }, + { + "outputs": [], + "cell_type": "code", + "source": [ + "SDDP.train(model)\n", + "simulations = SDDP.simulate(model, 200, [:x_inventory, :u_buy])\n", + "objective_values = [sum(t[:stage_objective] for t in s) for s in simulations]\n", + "μ, ci = round.(SDDP.confidence_interval(objective_values, 1.96); digits = 2)\n", + "lower_bound = round(SDDP.calculate_bound(model); digits = 2)\n", + "println(\"Confidence interval: \", μ, \" ± \", ci)\n", + "println(\"Lower bound: \", lower_bound)" + ], + "metadata": {}, + "execution_count": null + }, + { + "cell_type": "markdown", + "source": [ + "Plot the optimal inventory levels:" + ], + "metadata": {} + }, + { + "outputs": [], + "cell_type": "code", + "source": [ + "plt = SDDP.publication_plot(\n", + " simulations;\n", + " title = \"x_inventory.out + u_buy.out\",\n", + " xlabel = \"Stage\",\n", + " ylabel = \"Quantity\",\n", + " ylims = (0, 1_000),\n", + ") do data\n", + " return data[:x_inventory].out + data[:u_buy].out\n", + "end" + ], + "metadata": {}, + "execution_count": null + }, + { + "cell_type": "markdown", + "source": [ + "In the early stages, we indeed recover an order-up-to policy. However,\n", + "there are end-of-horizon effects as the agent tries to optimize their\n", + "decision making knowing that they have 10 realizations of demand." 
+ ], + "metadata": {} + }, + { + "cell_type": "markdown", + "source": [ + "## Infinite horizon" + ], + "metadata": {} + }, + { + "cell_type": "markdown", + "source": [ + "We can remove the end-of-horizonn effects by considering an infinite\n", + "horizon model. We assume a discount factor $\\alpha=0.95$:" + ], + "metadata": {} + }, + { + "outputs": [], + "cell_type": "code", + "source": [ + "α = 0.95\n", + "graph = SDDP.LinearGraph(2)\n", + "SDDP.add_edge(graph, 2 => 2, α)\n", + "graph" + ], + "metadata": {}, + "execution_count": null + }, + { + "cell_type": "markdown", + "source": [ + "The objective in this case is to minimize the discounted expected costs over\n", + "an infinite planning horizon." + ], + "metadata": {} + }, + { + "outputs": [], + "cell_type": "code", + "source": [ + "model = SDDP.PolicyGraph(\n", + " graph;\n", + " sense = :Min,\n", + " lower_bound = 0.0,\n", + " optimizer = HiGHS.Optimizer,\n", + ") do sp, t\n", + " @variable(sp, x_inventory >= 0, SDDP.State, initial_value = x_0)\n", + " @variable(sp, x_demand >= 0, SDDP.State, initial_value = 0)\n", + " # u_buy is a Decision-Hazard control variable. We decide u.out for use in\n", + " # the next stage\n", + " @variable(sp, u_buy >= 0, SDDP.State, initial_value = 0)\n", + " @variable(sp, u_sell >= 0)\n", + " @variable(sp, w_demand == 0)\n", + " @constraint(sp, x_inventory.out == x_inventory.in + u_buy.in - u_sell)\n", + " @constraint(sp, x_demand.out == x_demand.in + w_demand - u_sell)\n", + " if t == 1\n", + " fix(u_sell, 0; force = true)\n", + " @stageobjective(sp, c * u_buy.out)\n", + " else\n", + " @stageobjective(sp, c * u_buy.out + h * x_inventory.out + p * x_demand.out)\n", + " SDDP.parameterize(ω -> JuMP.fix(w_demand, ω), sp, Ω)\n", + " end\n", + " return\n", + "end\n", + "\n", + "SDDP.train(model; iteration_limit = 400)\n", + "simulations = SDDP.simulate(\n", + " model,\n", + " 200,\n", + " [:x_inventory, :u_buy];\n", + " sampling_scheme = SDDP.InSampleMonteCarlo(;\n", + " max_depth = 50,\n", + " terminate_on_dummy_leaf = false,\n", + " ),\n", + ");" + ], + "metadata": {}, + "execution_count": null + }, + { + "cell_type": "markdown", + "source": [ + "Plot the optimal inventory levels:" + ], + "metadata": {} + }, + { + "outputs": [], + "cell_type": "code", + "source": [ + "plt = SDDP.publication_plot(\n", + " simulations;\n", + " title = \"x_inventory.out + u_buy.out\",\n", + " xlabel = \"Stage\",\n", + " ylabel = \"Quantity\",\n", + " ylims = (0, 1_000),\n", + ") do data\n", + " return data[:x_inventory].out + data[:u_buy].out\n", + "end\n", + "Plots.hline!(plt, [662]; label = \"Analytic solution\")" + ], + "metadata": {}, + "execution_count": null + }, + { + "cell_type": "markdown", + "source": [ + "We again recover an order-up-to policy. The analytic solution is to\n", + "order-up-to 662 units. We do not precisely recover this solution because\n", + "we used a sample average approximation of 20 elements. If we increased the\n", + "number of samples, our solution would approach the analytic solution." 
+ ], + "metadata": {} + } + ], + "nbformat_minor": 3, + "metadata": { + "language_info": { + "file_extension": ".jl", + "mimetype": "application/julia", + "name": "julia", + "version": "1.11.1" + }, + "kernelspec": { + "name": "julia-1.11", + "display_name": "Julia 1.11.1", + "language": "julia" + } + }, + "nbformat": 4 +} diff --git a/previews/PR797/tutorial/inventory.jl b/previews/PR797/tutorial/inventory.jl new file mode 100644 index 000000000..41e79ddd2 --- /dev/null +++ b/previews/PR797/tutorial/inventory.jl @@ -0,0 +1,192 @@ +# Copyright (c) 2017-24, Oscar Dowson and SDDP.jl contributors. #src +# This Source Code Form is subject to the terms of the Mozilla Public #src +# License, v. 2.0. If a copy of the MPL was not distributed with this #src +# file, You can obtain one at http://mozilla.org/MPL/2.0/. #src + +# # Example: inventory management + +# The purpose of this tutorial is to demonstrate a well-known inventory +# management problem with a finite- and infinite-horizon policy. + +# ## Required packages + +# This tutorial requires the following packages: + +using SDDP +import Distributions +import HiGHS +import Plots +import Statistics + +# ## Background + +# Consider a periodic review inventory problem involving a single product. The +# initial inventory is denoted by $x_0 \geq 0$, and a decision-maker can place +# an order at the start of each stage. The objective is to minimize expected +# costs over the planning horizon. The following parameters define the cost +# structure: +# +# * `c` is the unit cost for purchasing each unit +# * `h` is the holding cost per unit remaining at the end of each stage +# * `p` is the shortage cost per unit of unsatisfied demand at the end of each +# stage +# +# There are no fixed ordering costs, and the demand at each stage is assumed to +# follow an independent and identically distributed random variable with +# cumulative distribution function (CDF) $\Phi(\cdot)$. Any unsatisfied demand +# is backlogged and carried forward to the next stage. + +# At each stage, an agent must decide how many items to order. The per-stage +# costs are the sum of the order costs, shortage and holding costs incurred at +# the end of the stage, after demand is realized. + +# Following Chapter 19 of Introduction to Operations Research by Hillier and +# Lieberman (7th edition), we use the following parameters: $c=15, h=1, p=15$. + +x_0 = 10 # initial inventory +c = 35 # unit inventory cost +h = 1 # unit inventory holding cost +p = 15 # unit order cost + +# Demand follows a continuous uniform distribution between 0 and 800. We +# construct a sample average approximation with 20 scenarios: + +Ω = range(0, 800; length = 20); + +# This is a well-known inventory problem with a closed-form solution. The +# optimal policy is a simple order-up-to policy: if the inventory level is +# below a certain number of units, the decision-maker orders up to that number +# of units. Otherwise, no order is placed. For a detailed analysis, refer +# to Foundations of Stochastic Inventory Theory by Evan Porteus (Stanford +# Business Books, 2002). + +# ## Finite horizon + +# For a finite horizon of length $T$, the problem is to minimize the total +# expected cost over all stages. + +# In the last stage, the decision-maker can recover the unit cost `c` for each +# leftover item, or buy out any remaining backlog, also at the unit cost `c`. 
+ +T = 10 # number of stages +model = SDDP.LinearPolicyGraph(; + stages = T + 1, + sense = :Min, + lower_bound = 0.0, + optimizer = HiGHS.Optimizer, +) do sp, t + @variable(sp, x_inventory >= 0, SDDP.State, initial_value = x_0) + @variable(sp, x_demand >= 0, SDDP.State, initial_value = 0) + ## u_buy is a Decision-Hazard control variable. We decide u.out for use in + ## the next stage + @variable(sp, u_buy >= 0, SDDP.State, initial_value = 0) + @variable(sp, u_sell >= 0) + @variable(sp, w_demand == 0) + @constraint(sp, x_inventory.out == x_inventory.in + u_buy.in - u_sell) + @constraint(sp, x_demand.out == x_demand.in + w_demand - u_sell) + if t == 1 + fix(u_sell, 0; force = true) + @stageobjective(sp, c * u_buy.out) + elseif t == T + 1 + fix(u_buy.out, 0; force = true) + @stageobjective(sp, -c * x_inventory.out + c * x_demand.out) + SDDP.parameterize(ω -> JuMP.fix(w_demand, ω), sp, Ω) + else + @stageobjective(sp, c * u_buy.out + h * x_inventory.out + p * x_demand.out) + SDDP.parameterize(ω -> JuMP.fix(w_demand, ω), sp, Ω) + end + return +end + +# Train and simulate the policy: + +SDDP.train(model) +simulations = SDDP.simulate(model, 200, [:x_inventory, :u_buy]) +objective_values = [sum(t[:stage_objective] for t in s) for s in simulations] +μ, ci = round.(SDDP.confidence_interval(objective_values, 1.96); digits = 2) +lower_bound = round(SDDP.calculate_bound(model); digits = 2) +println("Confidence interval: ", μ, " ± ", ci) +println("Lower bound: ", lower_bound) + +# Plot the optimal inventory levels: + +plt = SDDP.publication_plot( + simulations; + title = "x_inventory.out + u_buy.out", + xlabel = "Stage", + ylabel = "Quantity", + ylims = (0, 1_000), +) do data + return data[:x_inventory].out + data[:u_buy].out +end + +# In the early stages, we indeed recover an order-up-to policy. However, +# there are end-of-horizon effects as the agent tries to optimize their +# decision making knowing that they have 10 realizations of demand. + +# ## Infinite horizon + +# We can remove the end-of-horizonn effects by considering an infinite +# horizon model. We assume a discount factor $\alpha=0.95$: + +α = 0.95 +graph = SDDP.LinearGraph(2) +SDDP.add_edge(graph, 2 => 2, α) +graph + +# The objective in this case is to minimize the discounted expected costs over +# an infinite planning horizon. + +model = SDDP.PolicyGraph( + graph; + sense = :Min, + lower_bound = 0.0, + optimizer = HiGHS.Optimizer, +) do sp, t + @variable(sp, x_inventory >= 0, SDDP.State, initial_value = x_0) + @variable(sp, x_demand >= 0, SDDP.State, initial_value = 0) + ## u_buy is a Decision-Hazard control variable. 
We decide u.out for use in + ## the next stage + @variable(sp, u_buy >= 0, SDDP.State, initial_value = 0) + @variable(sp, u_sell >= 0) + @variable(sp, w_demand == 0) + @constraint(sp, x_inventory.out == x_inventory.in + u_buy.in - u_sell) + @constraint(sp, x_demand.out == x_demand.in + w_demand - u_sell) + if t == 1 + fix(u_sell, 0; force = true) + @stageobjective(sp, c * u_buy.out) + else + @stageobjective(sp, c * u_buy.out + h * x_inventory.out + p * x_demand.out) + SDDP.parameterize(ω -> JuMP.fix(w_demand, ω), sp, Ω) + end + return +end + +SDDP.train(model; iteration_limit = 400) +simulations = SDDP.simulate( + model, + 200, + [:x_inventory, :u_buy]; + sampling_scheme = SDDP.InSampleMonteCarlo(; + max_depth = 50, + terminate_on_dummy_leaf = false, + ), +); + +# Plot the optimal inventory levels: + +plt = SDDP.publication_plot( + simulations; + title = "x_inventory.out + u_buy.out", + xlabel = "Stage", + ylabel = "Quantity", + ylims = (0, 1_000), +) do data + return data[:x_inventory].out + data[:u_buy].out +end +Plots.hline!(plt, [662]; label = "Analytic solution") + +# We again recover an order-up-to policy. The analytic solution is to +# order-up-to 662 units. We do not precisely recover this solution because +# we used a sample average approximation of 20 elements. If we increased the +# number of samples, our solution would approach the analytic solution. diff --git a/previews/PR797/tutorial/inventory/0a6e9b84.svg b/previews/PR797/tutorial/inventory/0a6e9b84.svg new file mode 100644 index 000000000..bf16dda73 --- /dev/null +++ b/previews/PR797/tutorial/inventory/0a6e9b84.svg @@ -0,0 +1,57 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/previews/PR797/tutorial/inventory/478eb094.svg b/previews/PR797/tutorial/inventory/478eb094.svg new file mode 100644 index 000000000..006eb5a0b --- /dev/null +++ b/previews/PR797/tutorial/inventory/478eb094.svg @@ -0,0 +1,51 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/previews/PR797/tutorial/inventory/index.html b/previews/PR797/tutorial/inventory/index.html new file mode 100644 index 000000000..9fb533162 --- /dev/null +++ b/previews/PR797/tutorial/inventory/index.html @@ -0,0 +1,203 @@ + +Example: inventory management · SDDP.jl

Example: inventory management

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

The purpose of this tutorial is to demonstrate a well-known inventory management problem with a finite- and infinite-horizon policy.

Required packages

This tutorial requires the following packages:

using SDDP
+import Distributions
+import HiGHS
+import Plots
+import Statistics

Background

Consider a periodic review inventory problem involving a single product. The initial inventory is denoted by $x_0 \geq 0$, and a decision-maker can place an order at the start of each stage. The objective is to minimize expected costs over the planning horizon. The following parameters define the cost structure:

  • c is the unit cost for purchasing each unit
  • h is the holding cost per unit remaining at the end of each stage
  • p is the shortage cost per unit of unsatisfied demand at the end of each stage

There are no fixed ordering costs, and the demand at each stage is assumed to be independent and identically distributed with cumulative distribution function (CDF) $\Phi(\cdot)$. Any unsatisfied demand is backlogged and carried forward to the next stage.

At each stage, an agent must decide how many items to order. The per-stage costs are the sum of the order costs, shortage and holding costs incurred at the end of the stage, after demand is realized.
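Written out, if $x_t$ denotes the net inventory at the start of stage $t$ (positive for stock on hand, negative for backlog), $u_t$ the order quantity, and $d_t$ the realized demand, the per-stage cost can be sketched as (ignoring, for clarity, the one-stage ordering lag used in the SDDP model below):

\[
C_t(x_t, u_t, d_t) = c\,u_t + h \max(0,\; x_t + u_t - d_t) + p \max(0,\; d_t - x_t - u_t)
\]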

Following Chapter 19 of Introduction to Operations Research by Hillier and Lieberman (7th edition), we use the following parameters: $c=35, h=1, p=15$.

x_0 = 10        # initial inventory
+c = 35          # unit order (purchase) cost
+h = 1           # unit holding cost per item per stage
+p = 15          # unit shortage cost per item per stage
15

Demand follows a continuous uniform distribution between 0 and 800. We construct a sample average approximation with 20 scenarios:

Ω = range(0, 800; length = 20);

This is a well-known inventory problem with a closed-form solution. The optimal policy is a simple order-up-to policy: if the inventory level is below a certain number of units, the decision-maker orders up to that number of units. Otherwise, no order is placed. For a detailed analysis, refer to Foundations of Stochastic Inventory Theory by Evan Porteus (Stanford Business Books, 2002).

Finite horizon

For a finite horizon of length $T$, the problem is to minimize the total expected cost over all stages.

In the last stage, the decision-maker can recover the unit cost c for each leftover item, or buy out any remaining backlog, also at the unit cost c.

T = 10 # number of stages
+model = SDDP.LinearPolicyGraph(;
+    stages = T + 1,
+    sense = :Min,
+    lower_bound = 0.0,
+    optimizer = HiGHS.Optimizer,
+) do sp, t
+    @variable(sp, x_inventory >= 0, SDDP.State, initial_value = x_0)
+    @variable(sp, x_demand >= 0, SDDP.State, initial_value = 0)
+    # u_buy is a Decision-Hazard control variable. We decide u.out for use in
+    # the next stage
+    @variable(sp, u_buy >= 0, SDDP.State, initial_value = 0)
+    @variable(sp, u_sell >= 0)
+    @variable(sp, w_demand == 0)
+    @constraint(sp, x_inventory.out == x_inventory.in + u_buy.in - u_sell)
+    @constraint(sp, x_demand.out == x_demand.in + w_demand - u_sell)
+    if t == 1
+        fix(u_sell, 0; force = true)
+        @stageobjective(sp, c * u_buy.out)
+    elseif t == T + 1
+        fix(u_buy.out, 0; force = true)
+        @stageobjective(sp, -c * x_inventory.out + c * x_demand.out)
+        SDDP.parameterize(ω -> JuMP.fix(w_demand, ω), sp, Ω)
+    else
+        @stageobjective(sp, c * u_buy.out + h * x_inventory.out + p * x_demand.out)
+        SDDP.parameterize(ω -> JuMP.fix(w_demand, ω), sp, Ω)
+    end
+    return
+end
A policy graph with 11 nodes.
+ Node indices: 1, ..., 11
+

Train and simulate the policy:

SDDP.train(model)
+simulations = SDDP.simulate(model, 200, [:x_inventory, :u_buy])
+objective_values = [sum(t[:stage_objective] for t in s) for s in simulations]
+μ, ci = round.(SDDP.confidence_interval(objective_values, 1.96); digits = 2)
+lower_bound = round(SDDP.calculate_bound(model); digits = 2)
+println("Confidence interval: ", μ, " ± ", ci)
+println("Lower bound: ", lower_bound)
-------------------------------------------------------------------
+         SDDP.jl (c) Oscar Dowson and contributors, 2017-24
+-------------------------------------------------------------------
+problem
+  nodes           : 11
+  state variables : 3
+  scenarios       : 1.02400e+13
+  existing cuts   : false
+options
+  solver          : serial mode
+  risk measure    : SDDP.Expectation()
+  sampling scheme : SDDP.InSampleMonteCarlo
+subproblem structure
+  VariableRef                             : [9, 9]
+  AffExpr in MOI.EqualTo{Float64}         : [2, 2]
+  VariableRef in MOI.EqualTo{Float64}     : [1, 2]
+  VariableRef in MOI.GreaterThan{Float64} : [4, 5]
+  VariableRef in MOI.LessThan{Float64}    : [1, 1]
+numerical stability report
+  matrix range     [1e+00, 1e+00]
+  objective range  [1e+00, 4e+01]
+  bounds range     [0e+00, 0e+00]
+  rhs range        [0e+00, 0e+00]
+-------------------------------------------------------------------
+ iteration    simulation      bound        time (s)     solves  pid
+-------------------------------------------------------------------
+         1   3.886158e+05  4.573582e+04  1.881695e-02       212   1
+        55   1.440289e+05  1.443366e+05  1.024649e+00     14960   1
+       110   1.435658e+05  1.443373e+05  2.026297e+00     28820   1
+       166   1.592711e+05  1.443373e+05  3.031513e+00     40692   1
+       219   1.226816e+05  1.443373e+05  4.047272e+00     53028   1
+       268   1.446184e+05  1.443373e+05  5.052959e+00     63416   1
+       286   1.260500e+05  1.443373e+05  5.428404e+00     67232   1
+-------------------------------------------------------------------
+status         : simulation_stopping
+total time (s) : 5.428404e+00
+total solves   : 67232
+best bound     :  1.443373e+05
+simulation ci  :  1.446033e+05 ± 3.621723e+03
+numeric issues : 0
+-------------------------------------------------------------------
+
+Confidence interval: 142817.79 ± 3734.74
+Lower bound: 144337.34

Plot the optimal inventory levels:

plt = SDDP.publication_plot(
+    simulations;
+    title = "x_inventory.out + u_buy.out",
+    xlabel = "Stage",
+    ylabel = "Quantity",
+    ylims = (0, 1_000),
+) do data
+    return data[:x_inventory].out + data[:u_buy].out
+end
Example block output

In the early stages, we indeed recover an order-up-to policy. However, there are end-of-horizon effects because the agent optimizes their decisions knowing that only 10 realizations of demand remain before the end of the horizon.

Infinite horizon

We can remove the end-of-horizon effects by considering an infinite horizon model. We assume a discount factor $\alpha=0.95$:

α = 0.95
+graph = SDDP.LinearGraph(2)
+SDDP.add_edge(graph, 2 => 2, α)
+graph
Root
+ 0
+Nodes
+ 1
+ 2
+Arcs
+ 0 => 1 w.p. 1.0
+ 1 => 2 w.p. 1.0
+ 2 => 2 w.p. 0.95

The objective in this case is to minimize the discounted expected costs over an infinite planning horizon.
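In symbols, using the same per-stage cost components and writing $x_t^+$ and $x_t^-$ for the end-of-stage inventory and backlog, the objective is (a sketch; in the SDDP model below the discounting is implemented by the 2 => 2 arc with probability $\alpha$ rather than by explicit $\alpha^{t-1}$ weights):

\[
\min \; \mathbb{E}\left[\sum_{t=1}^{\infty} \alpha^{t-1} \left(c\,u_t + h\,x_t^+ + p\,x_t^-\right)\right]
\]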

model = SDDP.PolicyGraph(
+    graph;
+    sense = :Min,
+    lower_bound = 0.0,
+    optimizer = HiGHS.Optimizer,
+) do sp, t
+    @variable(sp, x_inventory >= 0, SDDP.State, initial_value = x_0)
+    @variable(sp, x_demand >= 0, SDDP.State, initial_value = 0)
+    # u_buy is a Decision-Hazard control variable. We decide u.out for use in
+    # the next stage
+    @variable(sp, u_buy >= 0, SDDP.State, initial_value = 0)
+    @variable(sp, u_sell >= 0)
+    @variable(sp, w_demand == 0)
+    @constraint(sp, x_inventory.out == x_inventory.in + u_buy.in - u_sell)
+    @constraint(sp, x_demand.out == x_demand.in + w_demand - u_sell)
+    if t == 1
+        fix(u_sell, 0; force = true)
+        @stageobjective(sp, c * u_buy.out)
+    else
+        @stageobjective(sp, c * u_buy.out + h * x_inventory.out + p * x_demand.out)
+        SDDP.parameterize(ω -> JuMP.fix(w_demand, ω), sp, Ω)
+    end
+    return
+end
+
+SDDP.train(model; iteration_limit = 400)
+simulations = SDDP.simulate(
+    model,
+    200,
+    [:x_inventory, :u_buy];
+    sampling_scheme = SDDP.InSampleMonteCarlo(;
+        max_depth = 50,
+        terminate_on_dummy_leaf = false,
+    ),
+);
-------------------------------------------------------------------
+         SDDP.jl (c) Oscar Dowson and contributors, 2017-24
+-------------------------------------------------------------------
+problem
+  nodes           : 2
+  state variables : 3
+  scenarios       : Inf
+  existing cuts   : false
+options
+  solver          : serial mode
+  risk measure    : SDDP.Expectation()
+  sampling scheme : SDDP.InSampleMonteCarlo
+subproblem structure
+  VariableRef                             : [9, 9]
+  AffExpr in MOI.EqualTo{Float64}         : [2, 2]
+  VariableRef in MOI.EqualTo{Float64}     : [1, 2]
+  VariableRef in MOI.GreaterThan{Float64} : [4, 5]
+numerical stability report
+  matrix range     [1e+00, 1e+00]
+  objective range  [1e+00, 4e+01]
+  bounds range     [0e+00, 0e+00]
+  rhs range        [0e+00, 0e+00]
+-------------------------------------------------------------------
+ iteration    simulation      bound        time (s)     solves  pid
+-------------------------------------------------------------------
+         1   1.976053e+04  3.345593e+04  6.960154e-03        85   1
+        27   4.079662e+05  2.999320e+05  1.010339e+00     13110   1
+        62   3.998361e+05  3.124508e+05  2.044865e+00     25304   1
+        83   4.808036e+05  3.126376e+05  3.073261e+00     35825   1
+       105   8.732187e+05  3.126616e+05  4.128147e+00     45528   1
+       121   7.242058e+05  3.126642e+05  5.169122e+00     53671   1
+       145   2.721555e+05  3.126649e+05  6.175422e+00     61612   1
+       167   6.178394e+05  3.126650e+05  7.271336e+00     69698   1
+       178   6.524500e+05  3.126650e+05  8.415252e+00     77437   1
+       198   7.501342e+05  3.126650e+05  9.566864e+00     84702   1
+       257   7.746053e+04  3.126650e+05  1.457950e+01    109961   1
+       311   9.295026e+05  3.126650e+05  1.960357e+01    127340   1
+       334   4.816711e+05  3.126650e+05  2.482720e+01    141328   1
+       356   6.182605e+05  3.126650e+05  3.021558e+01    151472   1
+       374   7.947658e+05  3.126650e+05  3.547974e+01    159848   1
+       396   2.336711e+05  3.126650e+05  4.054506e+01    166968   1
+       400   3.821342e+05  3.126650e+05  4.230857e+01    169114   1
+-------------------------------------------------------------------
+status         : iteration_limit
+total time (s) : 4.230857e+01
+total solves   : 169114
+best bound     :  3.126650e+05
+simulation ci  :  3.018209e+05 ± 2.740583e+04
+numeric issues : 0
+-------------------------------------------------------------------

Plot the optimal inventory levels:

plt = SDDP.publication_plot(
+    simulations;
+    title = "x_inventory.out + u_buy.out",
+    xlabel = "Stage",
+    ylabel = "Quantity",
+    ylims = (0, 1_000),
+) do data
+    return data[:x_inventory].out + data[:u_buy].out
+end
+Plots.hline!(plt, [662]; label = "Analytic solution")
Example block output

We again recover an order-up-to policy. The analytic solution is to order up to 662 units. We do not precisely recover this solution because we used a sample average approximation with 20 elements. If we increased the number of samples, our solution would approach the analytic solution.
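As a sanity check on the 662 figure, the classical base-stock result for the discounted backlogging model (see Porteus, 2002) sets the order-up-to level $S$ so that $\Phi(S) = \frac{p - (1-\alpha)c}{p + h}$. With the uniform demand used here, this can be evaluated in a few lines of Julia (a back-of-the-envelope sketch, not part of the tutorial source):

c, h, p, α = 35, 1, 15, 0.95                  # parameters from this tutorial
critical_ratio = (p - (1 - α) * c) / (p + h)  # ≈ 0.828
S = 800 * critical_ratio                      # demand is Uniform(0, 800), so S ≈ 662.5
println("Order-up-to level S ≈ ", round(S; digits = 1))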

diff --git a/previews/PR797/tutorial/markov_uncertainty/index.html b/previews/PR797/tutorial/markov_uncertainty/index.html index 9efa9bea9..71cf22c22 100644 --- a/previews/PR797/tutorial/markov_uncertainty/index.html +++ b/previews/PR797/tutorial/markov_uncertainty/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Markovian policy graphs

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

In our previous tutorials (An introduction to SDDP.jl and Uncertainty in the objective function), we formulated a simple hydrothermal scheduling problem with stagewise-independent random variables in the right-hand side of the constraints and in the objective function. Now, in this tutorial, we introduce some stagewise-dependent uncertainty using a Markov chain.

Formulating the problem

In this tutorial we consider a Markov chain with two climate states: wet and dry. Each Markov state is associated with an integer, in this case the wet climate state is Markov state 1 and the dry climate state is Markov state 2. In the wet climate state, the probability of the high inflow increases to 50%, and the probability of the low inflow decreases to 1/6. In the dry climate state, the converse happens. There is also persistence in the climate state: the probability of remaining in the current state is 75%, and the probability of transitioning to the other climate state is 25%. We assume that the first stage starts in the wet climate state.

Here is a picture of the model we're going to implement.

Markovian policy graph

There are five nodes in our graph. Each node is named by a tuple (t, i), where t is the stage for t=1,2,3, and i is the Markov state for i=1,2. As before, the wavy lines denote the stagewise-independent random variable.

For each stage, we need to provide a Markov transition matrix. This is an MxN matrix, where the element A[i, j] gives the probability of transitioning from Markov state i in the previous stage to Markov state j in the current stage. The first stage is special because we assume there is a "zero'th" stage which has one Markov state (the round node in the graph above). Furthermore, the number of columns in the transition matrix of a stage (i.e. the number of Markov states) must equal the number of rows in the next stage's transition matrix. For our example, the vector of Markov transition matrices is given by:

T = Array{Float64,2}[[1.0]', [0.75 0.25], [0.75 0.25; 0.25 0.75]]
3-element Vector{Matrix{Float64}}:
+

Markovian policy graphs

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

In our previous tutorials (An introduction to SDDP.jl and Uncertainty in the objective function), we formulated a simple hydrothermal scheduling problem with stagewise-independent random variables in the right-hand side of the constraints and in the objective function. Now, in this tutorial, we introduce some stagewise-dependent uncertainty using a Markov chain.

Formulating the problem

In this tutorial we consider a Markov chain with two climate states: wet and dry. Each Markov state is associated with an integer, in this case the wet climate state is Markov state 1 and the dry climate state is Markov state 2. In the wet climate state, the probability of the high inflow increases to 50%, and the probability of the low inflow decreases to 1/6. In the dry climate state, the converse happens. There is also persistence in the climate state: the probability of remaining in the current state is 75%, and the probability of transitioning to the other climate state is 25%. We assume that the first stage starts in the wet climate state.

Here is a picture of the model we're going to implement.

Markovian policy graph

There are five nodes in our graph. Each node is named by a tuple (t, i), where t is the stage for t=1,2,3, and i is the Markov state for i=1,2. As before, the wavy lines denote the stagewise-independent random variable.

For each stage, we need to provide a Markov transition matrix. This is an MxN matrix, where the element A[i, j] gives the probability of transitioning from Markov state i in the previous stage to Markov state j in the current stage. The first stage is special because we assume there is a "zero'th" stage which has one Markov state (the round node in the graph above). Furthermore, the number of columns in the transition matrix of a stage (i.e. the number of Markov states) must equal the number of rows in the next stage's transition matrix. For our example, the vector of Markov transition matrices is given by:

T = Array{Float64,2}[[1.0]', [0.75 0.25], [0.75 0.25; 0.25 0.75]]
3-element Vector{Matrix{Float64}}:
  [1.0;;]
  [0.75 0.25]
  [0.75 0.25; 0.25 0.75]
Note

Make sure to add the ' after the first transition matrix so Julia can distinguish between a vector and a matrix.

Creating a model

using SDDP, HiGHS
@@ -85,14 +85,14 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   1.875000e+04  1.991887e+03  5.798817e-03        18   1
-        40   5.000000e+03  8.072917e+03  1.367278e-01      1320   1
+         1   9.375000e+03  1.991887e+03  5.150795e-03        18   1
+        40   1.875000e+03  8.072917e+03  1.437938e-01      1320   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 1.367278e-01
+total time (s) : 1.437938e-01
 total solves   : 1320
 best bound     :  8.072917e+03
-simulation ci  :  8.463149e+03 ± 2.413376e+03
+simulation ci  :  5.917822e+03 ± 1.372472e+03
 numeric issues : 0
 -------------------------------------------------------------------

Instead of performing a Monte Carlo simulation as in the previous tutorials, we may want to simulate one particular sequence of noise realizations. This historical simulation can also be conducted by passing an SDDP.Historical sampling scheme to the sampling_scheme keyword of the SDDP.simulate function.
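For concreteness, a minimal sketch of such a call is shown below. The node indices follow the (stage, markov_state) convention of this tutorial, while the named-tuple noise values are illustrative placeholders rather than the exact sample space defined above:

simulations = SDDP.simulate(
    model;
    sampling_scheme = SDDP.Historical([
        # (node, noise) pairs: visit node (1, 1), then (2, 2), then (3, 1)
        ((1, 1), (inflow = 50.0, fuel_multiplier = 1.0)),
        ((2, 2), (inflow = 0.0, fuel_multiplier = 1.5)),
        ((3, 1), (inflow = 50.0, fuel_multiplier = 1.0)),
    ]),
)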

We can confirm that the historical sequence of nodes was visited by querying the :node_index key of the simulation results.

simulations = SDDP.simulate(
     model;
@@ -106,4 +106,4 @@
 [stage[:node_index] for stage in simulations[1]]
3-element Vector{Tuple{Int64, Int64}}:
  (1, 1)
  (2, 2)
- (3, 1)
+ (3, 1)
diff --git a/previews/PR797/tutorial/mdps/index.html b/previews/PR797/tutorial/mdps/index.html index 0abfaacd7..1c2764adf 100644 --- a/previews/PR797/tutorial/mdps/index.html +++ b/previews/PR797/tutorial/mdps/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Example: Markov Decision Processes

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

SDDP.jl can be used to solve a variety of Markov Decision processes. If the problem has continuous state and control spaces, and the objective and transition function are convex, then SDDP.jl can find a globally optimal policy. In other cases, SDDP.jl will find a locally optimal policy.

A simple example

A simple demonstration of this is the example taken from page 98 of the book "Markov Decision Processes: Discrete Stochastic Dynamic Programming" by Martin L. Puterman.

The example, as described in Section 4.6.3 of the book, is to minimize a sum of squares of N non-negative variables, subject to a budget constraint that the variable values add up to M. Put mathematically, that is:

\[\begin{aligned} +

Example: Markov Decision Processes

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

SDDP.jl can be used to solve a variety of Markov Decision processes. If the problem has continuous state and control spaces, and the objective and transition function are convex, then SDDP.jl can find a globally optimal policy. In other cases, SDDP.jl will find a locally optimal policy.

A simple example

A simple demonstration of this is the example taken from page 98 of the book "Markov Decision Processes: Discrete Stochastic Dynamic Programming" by Martin L. Puterman.

The example, as described in Section 4.6.3 of the book, is to minimize a sum of squares of N non-negative variables, subject to a budget constraint that the variable values add up to M. Put mathematically, that is:

\[\begin{aligned} \min \;\; & \sum\limits_{i=1}^N x_i^2 \\ s.t. \;\; & \sum\limits_{i=1}^N x_i = M \\ & x_i \ge 0, \quad i \in 1,\ldots,N @@ -61,11 +61,11 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 2.499895e+01 1.562631e+00 1.620889e-02 6 1 - 40 8.333333e+00 8.333333e+00 6.741519e-01 246 1 + 1 2.499895e+01 1.562631e+00 1.634693e-02 6 1 + 40 8.333333e+00 8.333333e+00 7.224190e-01 246 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 6.741519e-01 +total time (s) : 7.224190e-01 total solves : 246 best bound : 8.333333e+00 simulation ci : 8.810723e+00 ± 8.167195e-01 @@ -154,14 +154,14 @@ ------------------------------------------------------------------- iteration simulation bound time (s) solves pid ------------------------------------------------------------------- - 1 0.000000e+00 1.000000e+01 4.232883e-03 11 1 - 41 7.000000e+00 6.561000e+00 6.972868e-01 2875 1 + 1 0.000000e+00 7.217100e+00 5.120039e-03 13 1 + 40 2.500000e+01 6.561000e+00 8.739021e-01 3144 1 ------------------------------------------------------------------- status : simulation_stopping -total time (s) : 6.972868e-01 -total solves : 2875 +total time (s) : 8.739021e-01 +total solves : 3144 best bound : 6.561000e+00 -simulation ci : 6.195122e+00 ± 2.675728e+00 +simulation ci : 8.075000e+00 ± 2.944509e+00 numeric issues : 0 -------------------------------------------------------------------

Simulating a cyclic policy graph requires an explicit sampling_scheme that does not terminate early based on the cycle probability:

simulations = SDDP.simulate(
     model,
@@ -179,4 +179,4 @@
 
 print(join([join(path[i, :], ' ') for i in 1:size(path, 1)], '\n'))
1 2 3 ⋅
 ⋅ ▩ 4 †
-† ⋅ 5 *
Tip

This formulation will likely struggle as the number of cells in the maze increases. Can you think of an equivalent formulation that uses fewer state variables?

+† ⋅ 5 *
Tip

This formulation will likely struggle as the number of cells in the maze increases. Can you think of an equivalent formulation that uses fewer state variables?

diff --git a/previews/PR797/tutorial/objective_states/index.html b/previews/PR797/tutorial/objective_states/index.html index 8f99ef2ee..7c8b3e268 100644 --- a/previews/PR797/tutorial/objective_states/index.html +++ b/previews/PR797/tutorial/objective_states/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Objective states

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

There are many applications in which we want to model a price process that follows some auto-regressive process. Common examples include stock prices on financial exchanges and spot-prices in energy markets.

However, it is well known that these cannot be incorporated into SDDP because they result in cost-to-go functions that are convex with respect to some state variables (e.g., the reservoir levels) and concave with respect to other state variables (e.g., the spot price in the current stage).

To overcome this problem, the approach in the literature has been to discretize the price process in order to model it using a Markovian policy graph like those discussed in Markovian policy graphs.

However, recent work offers a way to include stagewise-dependent objective uncertainty into the objective function of SDDP subproblems. Readers are directed to the following works for an introduction:

  • Downward, A., Dowson, O., and Baucke, R. (2017). Stochastic dual dynamic programming with stagewise dependent objective uncertainty. Optimization Online. link

  • Dowson, O. PhD Thesis. University of Auckland, 2018. link

The method discussed in the above works introduces the concept of an objective state into SDDP. Unlike normal state variables in SDDP (e.g., the volume of water in the reservoir), the cost-to-go function is concave with respect to the objective states. Thus, the method builds an outer approximation of the cost-to-go function in the normal state-space, and an inner approximation of the cost-to-go function in the objective state-space.

Warning

Support for objective states in SDDP.jl is experimental. Models are considerably more computationally intensive, the interface is less user-friendly, and there are subtle gotchas to be aware of. Only use this if you have read and understood the theory behind the method.

One-dimensional objective states

Let's assume that the fuel cost is not fixed, but instead evolves according to a multiplicative auto-regressive process: fuel_cost[t] = ω * fuel_cost[t-1], where ω is drawn from the sample space [0.75, 0.9, 1.1, 1.25] with equal probability.

An objective state can be added to a subproblem using the SDDP.add_objective_state function. This can only be called once per subproblem. If you want to add a multi-dimensional objective state, read Multi-dimensional objective states. SDDP.add_objective_state takes a number of keyword arguments. The two required ones are

  • initial_value: the value of the objective state at the root node of the policy graph (i.e., identical to the initial_value when defining normal state variables).

  • lipschitz: the Lipschitz constant of the cost-to-go function with respect to the objective state. In other words, this value is the maximum change in the cost-to-go function at any point in the state space, given a one-unit change in the objective state.

There are also two optional keyword arguments: lower_bound and upper_bound, which give SDDP.jl hints (importantly, not constraints) about the domain of the objective state. Setting these bounds appropriately can improve the speed of convergence.

Finally, SDDP.add_objective_state requires an update function. This function takes two arguments. The first is the incoming value of the objective state, and the second is the realization of the stagewise-independent noise term (set using SDDP.parameterize). The function should return the value of the objective state to be used in the current subproblem.

This connection with the stagewise-independent noise term means that SDDP.parameterize must be called in a subproblem that defines an objective state. Inside SDDP.parameterize, the value of the objective state to be used in the current subproblem (i.e., after the update function) can be queried using SDDP.objective_state.

Here is the full model with the objective state.

using SDDP, HiGHS
+

Objective states

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

There are many applications in which we want to model a price process that follows some auto-regressive process. Common examples include stock prices on financial exchanges and spot-prices in energy markets.

However, it is well known that these cannot be incorporated into SDDP because they result in cost-to-go functions that are convex with respect to some state variables (e.g., the reservoir levels) and concave with respect to other state variables (e.g., the spot price in the current stage).

To overcome this problem, the approach in the literature has been to discretize the price process in order to model it using a Markovian policy graph like those discussed in Markovian policy graphs.

However, recent work offers a way to include stagewise-dependent objective uncertainty into the objective function of SDDP subproblems. Readers are directed to the following works for an introduction:

  • Downward, A., Dowson, O., and Baucke, R. (2017). Stochastic dual dynamic programming with stagewise dependent objective uncertainty. Optimization Online. link

  • Dowson, O. PhD Thesis. University of Auckland, 2018. link

The method discussed in the above works introduces the concept of an objective state into SDDP. Unlike normal state variables in SDDP (e.g., the volume of water in the reservoir), the cost-to-go function is concave with respect to the objective states. Thus, the method builds an outer approximation of the cost-to-go function in the normal state-space, and an inner approximation of the cost-to-go function in the objective state-space.

Warning

Support for objective states in SDDP.jl is experimental. Models are considerably more computationally intensive, the interface is less user-friendly, and there are subtle gotchas to be aware of. Only use this if you have read and understood the theory behind the method.

One-dimensional objective states

Let's assume that the fuel cost is not fixed, but instead evolves according to a multiplicative auto-regressive process: fuel_cost[t] = ω * fuel_cost[t-1], where ω is drawn from the sample space [0.75, 0.9, 1.1, 1.25] with equal probability.

An objective state can be added to a subproblem using the SDDP.add_objective_state function. This can only be called once per subproblem. If you want to add a multi-dimensional objective state, read Multi-dimensional objective states. SDDP.add_objective_state takes a number of keyword arguments. The two required ones are

  • initial_value: the value of the objective state at the root node of the policy graph (i.e., identical to the initial_value when defining normal state variables).

  • lipschitz: the Lipschitz constant of the cost-to-go function with respect to the objective state. In other words, this value is the maximum change in the cost-to-go function at any point in the state space, given a one-unit change in the objective state.

There are also two optional keyword arguments: lower_bound and upper_bound, which give SDDP.jl hints (importantly, not constraints) about the domain of the objective state. Setting these bounds appropriately can improve the speed of convergence.

Finally, SDDP.add_objective_state requires an update function. This function takes two arguments. The first is the incoming value of the objective state, and the second is the realization of the stagewise-independent noise term (set using SDDP.parameterize). The function should return the value of the objective state to be used in the current subproblem.

This connection with the stagewise-independent noise term means that SDDP.parameterize must be called in a subproblem that defines an objective state. Inside SDDP.parameterize, the value of the objective state to be used in the current subproblem (i.e., after the update function) can be queried using SDDP.objective_state.
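Before looking at the full model, here is a minimal sketch of just these two calls. It assumes a minimization subproblem sp with a decision variable thermal_generation, an initial fuel cost of 50, a sample space Ω of noise terms ω carrying a fuel multiplier field, and purely illustrative Lipschitz and bound values:

SDDP.add_objective_state(
    sp;
    initial_value = 50.0,        # fuel cost at the root node
    lipschitz = 10_000.0,        # illustrative Lipschitz constant
    lower_bound = 0.0,           # hints about the domain, not constraints
    upper_bound = 200.0,
) do fuel_cost, ω
    return ω.fuel * fuel_cost    # multiplicative auto-regressive update
end

SDDP.parameterize(sp, Ω) do ω
    fuel_cost = SDDP.objective_state(sp)  # value after the update function
    @stageobjective(sp, fuel_cost * thermal_generation)
    return
end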

Here is the full model with the objective state.

using SDDP, HiGHS
 
 model = SDDP.LinearPolicyGraph(;
     stages = 3,
@@ -79,25 +79,24 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   7.672500e+03  3.738585e+03  2.257490e-02        39   1
-       198   5.671875e+03  5.092593e+03  1.022979e+00      9522   1
-       300   2.475000e+03  5.092593e+03  1.449091e+00     13800   1
+         1   6.806250e+03  4.408308e+03  2.262402e-02        39   1
+       182   7.218750e+03  5.092593e+03  7.837250e-01      8598   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 1.449091e+00
-total solves   : 13800
+total time (s) : 7.837250e-01
+total solves   : 8598
 best bound     :  5.092593e+03
-simulation ci  :  4.966233e+03 ± 4.164351e+02
+simulation ci  :  4.992895e+03 ± 5.635857e+02
 numeric issues : 0
 -------------------------------------------------------------------
 
 Finished training and simulating.

To demonstrate how the objective states are updated, consider the sequence of noise observations:

[stage[:noise_term] for stage in simulations[1]]
3-element Vector{@NamedTuple{fuel::Float64, inflow::Float64}}:
- (fuel = 0.75, inflow = 100.0)
- (fuel = 0.75, inflow = 100.0)
- (fuel = 1.25, inflow = 50.0)

Thus, the fuel cost in the first stage should be 0.75 * 50 = 37.5. The fuel cost in the second stage should be 0.75 * 37.5 = 28.125. The fuel cost in the third stage should be 1.25 * 28.125 = 35.15625.

To confirm this, the values of the objective state in a simulation can be queried using the :objective_state key.

[stage[:objective_state] for stage in simulations[1]]
3-element Vector{Float64}:
- 37.5
- 28.125
- 35.15625

Multi-dimensional objective states

You can construct multi-dimensional price processes using NTuples. Just replace every scalar value associated with the objective state by a tuple. For example, initial_value = 1.0 becomes initial_value = (1.0, 2.0).

Here is an example:

model = SDDP.LinearPolicyGraph(;
+ (fuel = 1.1, inflow = 50.0)
+ (fuel = 1.25, inflow = 100.0)
+ (fuel = 1.1, inflow = 50.0)

Thus, the fuel cost in the first stage should be 1.1 * 50 = 55. The fuel cost in the second stage should be 1.25 * 55 = 68.75. The fuel cost in the third stage should be 1.1 * 68.75 = 75.625.

To confirm this, the values of the objective state in a simulation can be queried using the :objective_state key.

[stage[:objective_state] for stage in simulations[1]]
3-element Vector{Float64}:
+ 55.00000000000001
+ 68.75000000000001
+ 75.62500000000003

Multi-dimensional objective states

You can construct multi-dimensional price processes using NTuples. Just replace every scalar value associated with the objective state by a tuple. For example, initial_value = 1.0 becomes initial_value = (1.0, 2.0).

Here is an example:

model = SDDP.LinearPolicyGraph(;
     stages = 3,
     sense = :Min,
     lower_bound = 0.0,
@@ -171,19 +170,19 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   7.437500e+03  3.434307e+03  2.600098e-02        39   1
-       202   5.937500e+03  5.135984e+03  1.028277e+00      9978   1
-       300   1.187500e+04  5.135984e+03  1.525532e+00     13800   1
+         1   7.250000e+03  3.529412e+03  2.422404e-02        39   1
+       211   5.687500e+03  5.135984e+03  1.027903e+00     10029   1
+       290   1.150000e+04  5.135984e+03  1.379138e+00     13110   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 1.525532e+00
-total solves   : 13800
+total time (s) : 1.379138e+00
+total solves   : 13110
 best bound     :  5.135984e+03
-simulation ci  :  5.021167e+03 ± 4.471027e+02
+simulation ci  :  5.362165e+03 ± 4.590779e+02
 numeric issues : 0
 -------------------------------------------------------------------
 
 Finished training and simulating.

This time, since our objective state is two-dimensional, the objective states are tuples with two elements:

[stage[:objective_state] for stage in simulations[1]]
3-element Vector{Tuple{Float64, Float64}}:
- (40.0, 50.0)
- (40.0, 40.0)
- (50.0, 40.0)

Warnings

There are number of things to be aware of when using objective states.

  • The key assumption is that price is independent of the states and actions in the model.

    That means that the price cannot appear in any @constraints. Nor can you use any @variables in the update function.

  • Choosing an appropriate Lipschitz constant is difficult.

    The points discussed in Choosing an initial bound are relevant. The Lipschitz constant should not be chosen as large as possible (keeping it smaller helps with convergence and the numerical issues discussed above), but if it is chosen too small, it may cut off the feasible region and lead to a sub-optimal solution.

  • You need to ensure that the cost-to-go function is concave with respect to the objective state before the update.

    If the update function is linear, this is always the case. In some situations, the update function can be nonlinear (e.g., multiplicative as we have above). In general, placing constraints on the price (e.g., clamp(price, 0, 1)) will destroy concavity. Caveat emptor. It's up to you to decide whether this is a problem. If it isn't, you'll get a good heuristic with no guarantee of global optimality.

+ (55.0, 50.0)
+ (62.5, 55.0)
+ (71.25, 62.5)

Warnings

There are a number of things to be aware of when using objective states.

  • The key assumption is that price is independent of the states and actions in the model.

    That means that the price cannot appear in any @constraints. Nor can you use any @variables in the update function.

  • Choosing an appropriate Lipschitz constant is difficult.

    The points discussed in Choosing an initial bound are relevant. The Lipschitz constant should not be chosen as large as possible (keeping it smaller helps with convergence and the numerical issues discussed above), but if it is chosen too small, it may cut off the feasible region and lead to a sub-optimal solution.

  • You need to ensure that the cost-to-go function is concave with respect to the objective state before the update.

    If the update function is linear, this is always the case. In some situations, the update function can be nonlinear (e.g., multiplicative as we have above). In general, placing constraints on the price (e.g., clamp(price, 0, 1)) will destroy concavity. Caveat emptor. It's up to you to decide whether this is a problem. If it isn't, you'll get a good heuristic with no guarantee of global optimality.

diff --git a/previews/PR797/tutorial/objective_uncertainty/index.html b/previews/PR797/tutorial/objective_uncertainty/index.html index f6fe18572..1ced742d7 100644 --- a/previews/PR797/tutorial/objective_uncertainty/index.html +++ b/previews/PR797/tutorial/objective_uncertainty/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Uncertainty in the objective function

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

In the previous tutorial, An introduction to SDDP.jl, we created a stochastic hydro-thermal scheduling model. In this tutorial, we extend the problem by adding uncertainty to the fuel costs.

Previously, we assumed that the fuel cost was deterministic: $50/MWh in the first stage, $100/MWh in the second stage, and $150/MWh in the third stage. For this tutorial, we assume that in addition to these base costs, the actual fuel cost is correlated with the inflows.

Our new model for the uncertainty is given by the following table:

ω                1      2      3
P(ω)             1/3    1/3    1/3
inflow           0      50     100
fuel multiplier  1.5    1.0    0.75

In stage t, the objective is now to minimize:

fuel_multiplier * fuel_cost[t] * thermal_generation

Creating a model

To add an uncertain objective, we can simply call @stageobjective from inside the SDDP.parameterize function.

using SDDP, HiGHS
+

Uncertainty in the objective function

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

In the previous tutorial, An introduction to SDDP.jl, we created a stochastic hydro-thermal scheduling model. In this tutorial, we extend the problem by adding uncertainty to the fuel costs.

Previously, we assumed that the fuel cost was deterministic: $50/MWh in the first stage, $100/MWh in the second stage, and $150/MWh in the third stage. For this tutorial, we assume that in addition to these base costs, the actual fuel cost is correlated with the inflows.

Our new model for the uncertainty is given by the following table:

ω                1      2      3
P(ω)             1/3    1/3    1/3
inflow           0      50     100
fuel multiplier  1.5    1.0    0.75

In stage t, the objective is now to minimize:

fuel_multiplier * fuel_cost[t] * thermal_generation

Creating a model

To add an uncertain objective, we can simply call @stageobjective from inside the SDDP.parameterize function.
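The key pattern is sketched below; the full model shown next uses the same idea. The names (inflow, fuel_multiplier, fuel_cost, thermal_generation, subproblem) match the formulation above, but treat the snippet as an illustration rather than the verbatim tutorial source:

fuel_cost = [50.0, 100.0, 150.0]  # deterministic base cost per stage
Ω = [
    (inflow = 0.0, fuel_multiplier = 1.5),
    (inflow = 50.0, fuel_multiplier = 1.0),
    (inflow = 100.0, fuel_multiplier = 0.75),
]
SDDP.parameterize(subproblem, Ω, [1 / 3, 1 / 3, 1 / 3]) do ω
    JuMP.fix(inflow, ω.inflow)
    # The stage objective may depend on the realization ω:
    @stageobjective(
        subproblem,
        ω.fuel_multiplier * fuel_cost[t] * thermal_generation,
    )
    return
end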

using SDDP, HiGHS
 
 model = SDDP.LinearPolicyGraph(;
     stages = 3,
@@ -82,16 +82,16 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   2.500000e+04  3.958333e+03  3.963947e-03        12   1
-        40   1.875000e+03  1.062500e+04  7.159686e-02       642   1
+         1   3.750000e+04  3.958333e+03  3.509998e-03        12   1
+        60   1.125000e+04  1.062500e+04  1.048701e-01       963   1
 -------------------------------------------------------------------
 status         : simulation_stopping
-total time (s) : 7.159686e-02
-total solves   : 642
+total time (s) : 1.048701e-01
+total solves   : 963
 best bound     :  1.062500e+04
-simulation ci  :  1.044969e+04 ± 2.365515e+03
+simulation ci  :  1.142388e+04 ± 2.185147e+03
 numeric issues : 0
 -------------------------------------------------------------------
 
-Confidence interval: 10598.75 ± 730.45
-Lower bound: 10625.0
+Confidence interval: 10605.0 ± 707.79
+Lower bound: 10625.0
diff --git a/previews/PR797/tutorial/pglib_opf/index.html b/previews/PR797/tutorial/pglib_opf/index.html index dae7037f0..d6beade91 100644 --- a/previews/PR797/tutorial/pglib_opf/index.html +++ b/previews/PR797/tutorial/pglib_opf/index.html @@ -3,7 +3,7 @@ function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-HZQQDVMPZW', {'page_path': location.pathname + location.search + location.hash}); -

Alternative forward models

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example demonstrates how to train convex and non-convex models.

This example uses the following packages:

using SDDP
+

Alternative forward models

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

This example demonstrates how to train convex and non-convex models.

This example uses the following packages:

using SDDP
 import Ipopt
 import PowerModels
 import Test

Formulation

For our model, we build a simple optimal power flow model with a single hydro-electric generator.

The formulation of our optimal power flow problem depends on model_type, which must be one of the PowerModels formulations.

(To run locally, download pglib_opf_case5_pjm.m and update filename appropriately.)

function build_model(model_type)
@@ -61,24 +61,24 @@
 -------------------------------------------------------------------
  iteration    simulation      bound        time (s)     solves  pid
 -------------------------------------------------------------------
-         1   2.369329e+06  8.292865e+04  1.862850e-01        47   1
-         3   1.048214e+06  3.234043e+05  1.517102e+00       253   1
-         8   1.747989e+04  3.756159e+05  2.520988e+00       432   1
-        10   2.028528e+05  3.857742e+05  2.873698e+00       486   1
+         1   1.458563e+07  3.264622e+04  1.079772e+00       271   1
+         3   1.883248e+06  9.059830e+04  2.227736e+00       461   1
+         8   4.978020e+05  3.616759e+05  3.418261e+00       636   1
+        10   4.551755e+06  3.694773e+05  5.206153e+00       930   1
 -------------------------------------------------------------------
 status         : iteration_limit
-total time (s) : 2.873698e+00
-total solves   : 486
-best bound     :  3.857742e+05
-simulation ci  :  5.430437e+05 ± 4.659585e+05
+total time (s) : 5.206153e+00
+total solves   : 930
+best bound     :  3.694773e+05
+simulation ci  :  2.225401e+06 ± 2.832083e+06
 numeric issues : 0
 -------------------------------------------------------------------

To more accurately simulate the dynamics of the problem, a common approach is to write the cuts representing the policy to a file, and then read them into a non-convex model:

SDDP.write_cuts_to_file(convex, "convex.cuts.json")
 non_convex = build_model(PowerModels.ACPPowerModel)
 SDDP.read_cuts_from_file(non_convex, "convex.cuts.json")

Now we can simulate non_convex to evaluate the policy.

result = SDDP.simulate(non_convex, 1)
1-element Vector{Vector{Dict{Symbol, Any}}}:
- [Dict(:bellman_term => 364406.5063028627, :noise_term => 5, :node_index => 1, :stage_objective => 17578.224508661744, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 371269.1430658639, :noise_term => 0, :node_index => 1, :stage_objective => 17578.224508663472, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 370800.30517831264, :noise_term => 5, :node_index => 1, :stage_objective => 17578.22450866323, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 374730.3520810888, :noise_term => 2, :node_index => 1, :stage_objective => 17578.224508667332, :objective_state => nothing, :belief => Dict(1 => 1.0))]

A problem with reading and writing the cuts to file is that the cuts have been generated from trial points of the convex model. Therefore, the policy may be arbitrarily bad at points visited by the non-convex model.

Training a non-convex model

We can also build and train a non-convex formulation of the optimal power flow problem.

The problem with the non-convex model is that because it is non-convex, SDDP.jl may find a sub-optimal policy. Therefore, it may over-estimate the true cost of operation.

non_convex = build_model(PowerModels.ACPPowerModel)
+ [Dict(:bellman_term => 342080.8544885984, :noise_term => 5, :node_index => 1, :stage_objective => 21433.37543450599, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 348248.7384491772, :noise_term => 0, :node_index => 1, :stage_objective => 21433.37543450599, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 354416.622409756, :noise_term => 0, :node_index => 1, :stage_objective => 21433.37543450602, :objective_state => nothing, :belief => Dict(1 => 1.0))]

A problem with reading and writing the cuts to file is that the cuts have been generated from trial points of the convex model. Therefore, the policy may be arbitrarily bad at points visited by the non-convex model.

Training a non-convex model

We can also build and train a non-convex formulation of the optimal power flow problem.

The problem with the non-convex model is that because it is non-convex, SDDP.jl may find a sub-optimal policy. Therefore, it may over-estimate the true cost of operation.

non_convex = build_model(PowerModels.ACPPowerModel)
 SDDP.train(non_convex; iteration_limit = 10)
 result = SDDP.simulate(non_convex, 1)
1-element Vector{Vector{Dict{Symbol, Any}}}:
- [Dict(:bellman_term => 336580.4788944915, :noise_term => 2, :node_index => 1, :stage_objective => 17566.78208205367, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 339552.42873898963, :noise_term => 2, :node_index => 1, :stage_objective => 17566.78208229366, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 342754.270375492, :noise_term => 0, :node_index => 1, :stage_objective => 21088.17600366217, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 342754.2703924718, :noise_term => 2, :node_index => 1, :stage_objective => 23580.697409811142, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 422553.1381115723, :noise_term => 0, :node_index => 1, :stage_objective => 27420.55350575335, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 586417.1156925573, :noise_term => 0, :node_index => 1, :stage_objective => 27420.553514899035, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 342754.2703751937, :noise_term => 5, :node_index => 1, :stage_objective => 19740.532670530545, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 342411.08429137414, :noise_term => 5, :node_index => 1, :stage_objective => 17566.782084086888, :objective_state => nothing, :belief => Dict(1 => 1.0)), Dict(:bellman_term => 342067.8982245199, :noise_term => 5, :node_index => 1, :stage_objective => 17566.78208328695, :objective_state => nothing, :belief => Dict(1 => 1.0))]

Combining convex and non-convex models

To summarize, training with the convex model constructs cuts at points that may never be visited by the non-convex model, and training with the non-convex model may construct arbitrarily poor cuts because a key assumption of SDDP is convexity.

As a compromise, we can train a policy using a combination of the convex and non-convex models; we'll use the non-convex model to generate trial points on the forward pass, and we'll use the convex model to build cuts on the backward pass.

convex = build_model(PowerModels.DCPPowerModel)
A policy graph with 1 nodes.
  Node indices: 1
 
non_convex = build_model(PowerModels.ACPPowerModel)
A policy graph with 1 nodes.
  Node indices: 1
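The training call itself is not shown in this diff. A minimal sketch of the combined scheme described above, assuming the SDDP.AlternativeForwardPass and SDDP.AlternativePostIterationCallback utilities (which let one model generate the forward-pass trial points while cuts are computed on another), is:

SDDP.train(
    convex;
    forward_pass = SDDP.AlternativeForwardPass(non_convex),
    post_iteration_callback = SDDP.AlternativePostIterationCallback(non_convex),
    iteration_limit = 10,
)

A training run of this kind produces a log like the following.
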
-------------------------------------------------------------------
 iteration    simulation      bound        time (s)     solves  pid
-------------------------------------------------------------------
         1   1.616405e+06  6.633473e+04  2.201200e-01        30   1
         3   7.336368e+05  2.131314e+05  1.479425e+00       141   1
         7   1.613523e+06  3.688984e+05  4.756617e+00       387   1
         8   1.001434e+07  3.810947e+05  7.486794e+00       564   1
        10   1.367906e+06  3.877966e+05  1.048707e+01       783   1
-------------------------------------------------------------------
status         : iteration_limit
total time (s) : 1.048707e+01
total solves   : 783
best bound     :  3.877966e+05
simulation ci  :  1.638935e+06 ± 1.863579e+06
numeric issues : 0
-------------------------------------------------------------------

In practice, if we were to simulate non_convex now, we should obtain a better policy than with either of the two previous approaches.



Plotting tools

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

In our previous tutorials, we formulated, solved, and simulated multistage stochastic optimization problems. However, we haven't really investigated what the solution looks like. Luckily, SDDP.jl includes a number of plotting tools to help us do that. In this tutorial, we explain the tools and make some pretty pictures.

Preliminaries

The next two plot types help visualize the policy. Thus, we first need to create a policy and simulate some trajectories. So, let's take the model from Markovian policy graphs, train it for 20 iterations, and then simulate 100 Monte Carlo realizations of the policy.

using SDDP, HiGHS
 
 Ω = [
     (inflow = 0.0, fuel_multiplier = 1.5),
-------------------------------------------------------------------
 iteration    simulation      bound        time (s)     solves  pid
-------------------------------------------------------------------
         1   2.812500e+04  1.991887e+03  1.481318e-02        18   1
        20   1.125000e+04  8.072917e+03  5.060315e-02       360   1
-------------------------------------------------------------------
status         : iteration_limit
total time (s) : 5.060315e-02
total solves   : 360
best bound     :  8.072917e+03
simulation ci  :  1.082898e+04 ± 2.947323e+03
numeric issues : 0
-------------------------------------------------------------------
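The call that generates the simulated trajectories used below is elided by the diff. A minimal sketch, assuming the trained model above and the variable names from the Markovian policy graphs tutorial (:volume, :thermal_generation, :hydro_generation, and :hydro_spill are assumptions), is:

# Training, whose log is shown above, would already have been run, e.g.
# SDDP.train(model; iteration_limit = 20).
simulations = SDDP.simulate(
    model,
    100,  # 100 Monte Carlo replications of the policy
    [:volume, :thermal_generation, :hydro_generation, :hydro_spill],
)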
 
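# The start of this plotting call is elided by the diff. A hypothetical
# reconstruction, assuming `using Plots`, the `simulations` vector sketched
# above, and SDDP.publication_plot (which maps each stage of every simulation
# to a value and summarises the distribution across replications); the panel
# titles are illustrative:
Plots.plot(
    SDDP.publication_plot(simulations; title = "Outgoing volume") do data
        return data[:volume].out
    end,
    SDDP.publication_plot(simulations; title = "Thermal generation") do data
        return data[:thermal_generation]
    end;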
     xlabel = "Stage",
     ylims = (0, 200),
     layout = (1, 2),
)
Example block output

You can save this plot as a PDF using the Plots.jl function savefig:

Plots.savefig("my_picture.pdf")

Plotting the value function

You can obtain an object representing the value function of a node using SDDP.ValueFunction.

V = SDDP.ValueFunction(model[(1, 1)])
A value function for node (1, 1)

The value function can be evaluated using SDDP.evaluate.

SDDP.evaluate(V; volume = 1)
(23019.270833333332, Dict(:volume => -157.8125))

evaluate returns the height of the value function, and a subgradient with respect to the convex state variables.
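For instance, a small illustrative sketch (the loop and printing are not part of the tutorial) that traces the approximate value function over a range of volumes:

for v in (0, 50, 100, 150, 200)
    height, duals = SDDP.evaluate(V; volume = v)
    println("V(volume = ", v, ") ≈ ", height, ", subgradient ≈ ", duals[:volume])
end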

You can also plot the value function using SDDP.plot.

SDDP.plot(V, volume = 0:200, filename = "value_function.html")

This should open a webpage that looks like this one.

Convergence dashboard

If the text-based logging isn't to your liking, you can open a visualization of the training by passing dashboard = true to SDDP.train.

SDDP.train(model; dashboard = true)

By default, dashboard = false because there is an initial overhead associated with opening and preparing the plot.

Warning

The dashboard is experimental. There are known bugs associated with it, e.g., SDDP.jl#226.


Words of warning

This tutorial was generated using Literate.jl. Download the source as a .jl file. Download the source as a .ipynb file.

SDDP is a powerful solution technique for multistage stochastic programming. However, there are a number of subtle things to be aware of before creating your own models.

Relatively complete recourse

Models built in SDDP.jl need a property called relatively complete recourse.

One definition of relatively complete recourse is that all feasible decisions (not necessarily optimal) in a subproblem lead to feasible decisions in future subproblems.

For example, in the following problem, one feasible first-stage decision is x.out = 0. But this causes an infeasibility in the second stage, which requires x.in >= 1. If you try to solve the model, SDDP.jl will throw an error about infeasibility.

using SDDP, HiGHS
 
 model = SDDP.LinearPolicyGraph(;
     stages = 2,
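    # The remainder of this model definition is elided by the diff. A
    # hypothetical completion matching the description above might be:
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, x >= 0, SDDP.State, initial_value = 0.0)
    if t == 2
        # The second stage requires x.in >= 1, so the feasible first-stage
        # decision x.out = 0 has no feasible recourse.
        @constraint(sp, x.in >= 1)
    end
    @stageobjective(sp, x.out)
end
# Attempting to train this model throws an infeasibility error like the
# excerpt below.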
 
   Termination status : INFEASIBLE
   Primal status      : NO_SOLUTION
  Dual status        : INFEASIBILITY_CERTIFICATE.
 
 The current subproblem was written to `subproblem_2.mof.json`.
 
-------------------------------------------------------------------
 iteration    simulation      bound        time (s)     solves  pid
-------------------------------------------------------------------
         1   6.500000e+00  3.000000e+00  3.134012e-03         6   1
         5   3.500000e+00  3.500000e+00  5.883932e-03        30   1
-------------------------------------------------------------------
status         : iteration_limit
total time (s) : 5.883932e-03
total solves   : 30
best bound     :  3.500000e+00
simulation ci  :  4.100000e+00 ± 1.176000e+00
-------------------------------------------------------------------
 iteration    simulation      bound        time (s)     solves  pid
-------------------------------------------------------------------
         1   6.500000e+00  1.100000e+01  2.928972e-03         6   1
         5   5.500000e+00  1.100000e+01  5.285025e-03        30   1
-------------------------------------------------------------------
status         : iteration_limit
total time (s) : 5.285025e-03
total solves   : 30
best bound     :  1.100000e+01
simulation ci  :  5.700000e+00 ± 3.920000e-01
numeric issues : 0
-------------------------------------------------------------------

How do we tell which is more appropriate? There are a few clues that you should look out for.

  • The bound converges to a value above (if minimizing) the simulated cost of the policy. In this case, the problem is deterministic, so it is easy to tell. But you can also check by performing a Monte Carlo simulation like we did in An introduction to SDDP.jl; a sketch of that check follows this list.

  • The bound converges to different values when we change the bound. This is another clear give-away. The bound provided by the user is only used in the initial iterations. It should not change the value of the converged policy. Thus, if you don't know an appropriate value for the bound, choose an initial value, and then increase (or decrease) the value of the bound to confirm that the value of the policy doesn't change.

  • The bound converges to a value close to the bound provided by the user. This varies between models, but notice that 11.0 is quite close to 10.0 compared with 3.5 and 0.0.
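A rough sketch of that Monte Carlo check, using SDDP.simulate, SDDP.confidence_interval, and SDDP.calculate_bound as in that introductory tutorial (the replication count and printing are illustrative):

simulations = SDDP.simulate(model, 500)
objectives = [sum(stage[:stage_objective] for stage in sim) for sim in simulations]
μ, ci = SDDP.confidence_interval(objectives)
println("Simulated cost : ", μ, " ± ", ci)
println("Lower bound    : ", SDDP.calculate_bound(model))

If the converged bound sits above this confidence interval in a minimization problem, the lower_bound you supplied is restricting the policy.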
