Cut sharing with Markov policy graph #796
I've got an SDDP model with a Markov chain used to define states and the transitions between them at each stage.

After noticing some strange behaviour in the simulated policy, the model was retrained with `refine_at_similar_nodes` set to `false`. This resolved the issue. I've tracked the cause of the problem to there being 0 probabilities between some pairs of states at various stages. This seems to lead to SDDP not solving the model for those states, while the dual variables from the unsolved models are still being used to form cuts at other nodes in the policy graph.

I can't post an MFE here, but the above description will hopefully allow this issue to be reproduced.
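Not the reporter's model, but a minimal sketch of the setup described: a Markovian policy graph whose transition matrices contain zero entries between some pairs of states, retrained with `refine_at_similar_nodes = false`. The dynamics, the numbers, and the choice of HiGHS are invented for illustration.

```julia
using SDDP, HiGHS

model = SDDP.MarkovianPolicyGraph(
    # Three stages, two Markov states. The zero entries give some pairs
    # of states a 0 transition probability, as described above.
    transition_matrices = Matrix{Float64}[
        [0.5 0.5],
        [0.9 0.1; 0.0 1.0],  # state 2 never transitions back to state 1
        [0.9 0.1; 0.0 1.0],
    ],
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, node
    t, markov_state = node
    @variable(sp, 0 <= x <= 10, SDDP.State, initial_value = 5.0)
    @constraint(sp, x.out == x.in - 1)        # placeholder dynamics
    @stageobjective(sp, markov_state * x.out)
end

# The workaround from the report: retrain with cut sharing between
# similar nodes disabled.
SDDP.train(model; iteration_limit = 20, refine_at_similar_nodes = false)
```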
Comments

What is the issue exactly? "strange behavior" isn't very descriptive.

Oooo, now I understand. The node will have a cached dual solution from a previous solve, but the incoming state variables won't line up.
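To spell out the mismatch (generic cut notation, not code from the thread): a cut generated at incoming state $\bar{x}$ has the form $\theta \ge \hat{v} + \hat{\pi}^\top (x - \bar{x})$, where $\hat{v}$ and $\hat{\pi}$ are the objective value and duals of a subproblem solved with its incoming state fixed at $\bar{x}$. If the node was never re-solved, its cached $(\hat{v}, \hat{\pi})$ belong to some earlier state $\bar{x}' \ne \bar{x}$; pairing them with the current $\bar{x}$ shifts the cut by $\hat{\pi}^\top(\bar{x}' - \bar{x})$, which can lift it above the true value function and cut off optimal policies.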
That makes sense. I had tried to fix it myself, but decided to just post here.

I thought I'd fixed the zero-probability arc thing. It's come up before, but I can't immediately find the issue.

Note to self: it's probably sufficient to just exclude zero-probability arcs here: Lines 57 to 80 in 4091155.
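A hypothetical sketch of that exclusion, not the actual code at the lines cited above; `node.children`, the `term`/`probability` fields, and `accumulate_duals!` are stand-ins for whatever the real loop uses:

```julia
# Hypothetical sketch, not the SDDP.jl source referenced above: when
# accumulating the children's duals into a cut, skip zero-probability
# arcs, so subproblems that are never solved on the forward pass cannot
# contribute their stale cached duals.
for noise in node.children               # assumed fields: term, probability
    noise.probability > 0.0 || continue  # the proposed exclusion
    accumulate_duals!(cut, noise)        # stand-in for the real update
end
```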