
Periodical SDDP #802

Closed
FSchmidtDIW opened this issue Nov 9, 2024 · 2 comments
@FSchmidtDIW

Hi Oscar,

I am dealing with an end-of-horizon effect in a capacity expansion problem.
Currently, I have added a soft constraint to the final node that penalises shortfalls from a target storage level decided along with the capacities in node 1. This is rather restrictive. I want to explore other options. Unfortunately, an infinite-horizon reformulation with a unicyclic graph is in all likelihood computationally intractable given my discount rate and the complexity of the subproblems.

Alternatively, I came across the Shapiro and Ding (2020) paper. Unless I am misunderstanding, for periodical problems they suggest solving an additional node at the end of the horizon. So if I normally modeled a year with an investment stage and 12 monthly operational stages from Jan to Dec, I would add another Jan at the end. They then add the cutting plane model of operational node 1 (the first Jan) to operational node 13 (the second Jan).

How "simple" would it be to add the cutting plane model of one stage to another stage in SDDP.jl? Is that something I could do in a new ForwardPass?

Thanks a lot,

Felix

@odow

odow commented Nov 10, 2024

Shapiro's "periodical SDDP" is just SDDP with a policy graph that forms a single cycle (our unicyclic graph).

If you don't care about the global optimal solution, then just train a heuristic policy.

It sounds like your idea is to have a unicyclic graph with 12 nodes, and then train with:

SDDP.train(
    model;
    sampling_scheme = SDDP.InSampleMonteCarlo(;
        max_depth = 13,
    ),
)

Another option would be to just train with a smaller discount factor (higher interest rate), and then simulate the policy with the true factor.
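Putting the pieces of this thread together, a unicyclic 12-node model trained with a truncated forward pass might look like the sketch below. The subproblem (a single storage state, the bounds, the costs) and the HiGHS optimizer are placeholder assumptions for illustration, not taken from this issue; only `SDDP.UnicyclicGraph`, `SDDP.InSampleMonteCarlo`, and `max_depth = 13` come from the discussion above.

```julia
using SDDP, HiGHS

# Unicyclic policy graph: 12 monthly nodes, with node 12 looping back
# to node 1 under a 0.95 discount factor (placeholder value).
graph = SDDP.UnicyclicGraph(0.95; num_nodes = 12)

model = SDDP.PolicyGraph(
    graph;
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, node
    # Placeholder subproblem: one storage state with a fixed monthly
    # draw-down and a penalised shortfall.
    @variable(sp, 0 <= storage <= 100, SDDP.State, initial_value = 50)
    @variable(sp, shortfall >= 0)
    @constraint(sp, storage.out == storage.in - 10 + shortfall)
    @stageobjective(sp, 100 * shortfall)
end

# Truncate each forward pass after 13 nodes, i.e. Jan..Dec plus a
# second Jan, mimicking the periodical scheme discussed above.
SDDP.train(
    model;
    sampling_scheme = SDDP.InSampleMonteCarlo(; max_depth = 13),
    iteration_limit = 10,
)
```

Because the graph is cyclic, the cuts computed at the second Jan are added to the same cut model as the first Jan automatically, which is why no custom `ForwardPass` is needed.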

@FSchmidtDIW

Ah, of course! This makes sense. Thanks a lot!
