Currently, running forecast on a mable fails if any of the models errors out while being forecasted. Is there a way to either drop such a model completely or return some "NULL forecast" from it?
We run batches of 1000 models which take about 20 minutes to compute, and then the whole job immediately fails with an error on just one bad model.
Basically, a feature similar to how a failed model training function returns a NULL model instead of killing the entire process.
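As a stopgap until the error is handled gracefully upstream, one can forecast the mable one row at a time and skip the rows that error. This is only a sketch, not fabletools API: `mbl` is assumed to be an existing mable, `h = 12` is an arbitrary horizon, and the final combine step may lose fable-specific attributes.

```r
library(fabletools)
library(dplyr)

# Hypothetical workaround: forecast each mable row separately so that
# a single bad model cannot abort the whole batch. Rows whose forecast
# errors are dropped (the "NULL forecast" behaviour asked for above).
row_fcs <- lapply(seq_len(nrow(mbl)), function(i) {
  tryCatch(forecast(mbl[i, ], h = 12), error = function(e) NULL)
})

# Keep only the successful forecasts; bind_rows() may need adjustment
# to preserve the fable class and distribution columns.
fc <- bind_rows(Filter(Negate(is.null), row_fcs))
```

This trades some overhead per row for the guarantee that 999 good forecasts survive one bad model.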
Interesting, I'm surprised that a forecast would fail if a model was successful.
I think the batch handling of errors can be improved in general, with the technique applied consistently across all modelling methods.
I forgot to add that this is not a model from fable but one we built in-house (thanks for making fabletools so flexible!). So it's quite buggy, but when it fails it usually fails on something like a division by zero or a value being NaN, which 99% of the time means the input time series was deficient and we don't want to bother with it.
Our scope is around 2 million time series forecasts for long-term planning, so if 0.1% fail we don't really care that much, especially if the data will be fixed the next day.
In our case it is all the more painful in that the models train for about 10-30 minutes depending on the selection of series (as I said, we batch them at 1000 per job), but the forecast step only takes about a minute, so a lot of work is wasted.
Another option would be to somehow model + forecast in one step, i.e. instead of doing:
1000x model
1000x forecast
run it like
1000x (model + forecast)
but this would arguably be too big a change, and the current pipeline is elegant in that at each step I get something I can reason about.
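The combined step above could be sketched as a single per-series function, so a training or forecasting failure on one series yields NULL rather than aborting the batch. Everything here is hypothetical: `one_series` is assumed to be a tsibble for a single key, and `my_model()` stands in for the in-house model definition mentioned earlier.

```r
library(fabletools)

# Hypothetical combined step: fit and forecast one series, returning
# NULL on any error instead of killing the whole 1000-series job.
model_and_forecast <- function(one_series, h = 12) {
  tryCatch(
    forecast(model(one_series, fit = my_model()), h = h),
    error = function(e) NULL  # deficient series: skip, don't abort
  )
}
```

The cost is losing the intermediate mable, which is exactly the trade-off the post notes against the current step-by-step pipeline.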
Some graceful failure/fallback on a forecast method error would be more than sufficient, especially when running at scale.