feature: add get_schedule for asset schedules #1086
base: feature/api/endpoint-for-scheduling-asset
Conversation
Signed-off-by: F.N. Claessen <[email protected]>
Awesome!
Basically pointing out some low-effort improvements for the test.
```diff
-            url_for(
-                "SensorAPI:get_schedule", id=sensor.id, uuid=scheduling_job.id
-            ),  # todo: use (last?) job_id from trigger response
+            url_for("AssetAPI:get_schedule", id=sensor.id, uuid=job_id),
```
Perhaps we can try to get the schedules right before letting the worker take the jobs (line 85), and also before the jobs are enqueued, to test the error for a non-existent job.
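A rough sketch of what those extra checks could look like, in the spirit of the snippet above (the fixture names, `job_id`, and the expected status codes are assumptions, not taken from this PR):

```python
from flask import url_for


def test_get_schedule_before_worker_runs(client, sensor, job_id):
    # Hypothetical: the job has been triggered but no worker has run yet,
    # so the schedule should not be available.
    response = client.get(
        url_for("AssetAPI:get_schedule", id=sensor.id, uuid=job_id)
    )
    assert response.status_code == 400  # assumed "unknown schedule" response

    # Hypothetical: a job ID that was never enqueued at all.
    response = client.get(
        url_for(
            "AssetAPI:get_schedule",
            id=sensor.id,
            uuid="00000000-0000-0000-0000-000000000000",
        )
    )
    assert response.status_code == 400  # assumed error for a non-existent job
```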
```diff
@@ -312,6 +312,9 @@ def create_sequential_scheduling_job(
         connection=current_app.queues["scheduling"].connection,
     )

+    # Stand-in for MultiStorageScheduler
```
Nice!
I wonder if we should reflect in the output that the schedules were computed using a sequential approach.
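For instance (purely illustrative; the field names below are made up, not from this PR), the overall response could name the approach that produced it:

```python
# Illustrative sketch: wrap the per-sensor schedules in an object that
# records the scheduling approach; "scheduling_approach" is a made-up key.
sensor_schedule_responses: list[dict] = []  # collected per child job, as below

overall_schedule_response = {
    "scheduling_approach": "sequential",  # vs. e.g. "simultaneous" later on
    "schedules": sensor_schedule_responses,
}
```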
```python
        consumption_schedule = consumption_schedule[
            start : start + duration - resolution
        ]
        sensor_schedule_response = dict(
```
Should this one include the scheduler_info as well? That way we can detect which schedules were fallbacks and which were not.
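A minimal sketch of what that could look like, assuming the child job records which scheduler ran (e.g. in its job metadata); all key names here are illustrative, not from this PR:

```python
# Sketch only: assumes each child job stores info about the scheduler that
# produced its result, e.g. under job.meta["scheduler_info"].
sensor_schedule_response = dict(
    values=consumption_schedule.tolist(),
    start=start.isoformat(),
    duration=str(duration),
    scheduler_info=child_job.meta.get("scheduler_info"),  # lets clients spot fallbacks
)
```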
```python
        )

        overall_schedule_response = []
        for child_job in job.fetch_dependencies():
```
It looks like there's quite a large overlap between this logic and the logic found in the sensor get_schedule endpoint. I'd say to keep it like that for now, given that this endpoint might still see some changes when we introduce the non-sequential algorithm.
As an aside, I wonder about the case where we do centralized planning and we want the assets to get their schedules. Maybe in that case the devices can query the schedule "blindly" at a certain frequency.
If we do centralized planning, we plan them all at once, so either none of them are done or all of them are done, right?
Yes, but the planning needs to happen from only one point. I'm thinking of the case where we trigger the plan on our side (for example, via a CLI command in a cron job) and the devices request their schedules. This is one idea we had for TUNES.
Oh yes, you are absolutely right. Thanks for jogging my memory. It should be made possible for individual devices to get their own schedule using the job ID of the collective scheduling job. I can make that happen, probably by adding a few lines to the original get_schedule endpoint.
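Something along these lines might do it (a sketch under assumptions: that the collective job's child jobs expose the sensor they scheduled, e.g. via their keyword arguments; none of these names come from this PR):

```python
def resolve_job_for_sensor(job, sensor_id: int):
    """If `job` is a collective (sequential) scheduling job, return the
    child job that scheduled the given sensor; otherwise return `job`.

    Sketch only: assumes child jobs carry a `sensor_id` keyword argument.
    """
    for child_job in job.fetch_dependencies():  # as used in this PR
        if child_job.kwargs.get("sensor_id") == sensor_id:
            return child_job
    return job
```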
Description
Look & Feel
How to test
Further Improvements
Related Items
Follow-up to #1065.