I suspect a bug around the negative binomial. Its performance seems to be off compared with the other available distributions, even on positive count data, where it is supposed to be efficient.
I'm bringing this up again because it's becoming a sticking point for me: I need output samples, not quantiles. In my field we deal with count data (i.e., positive integers), and historically we've worked a lot with Tweedie and NegativeBinomial :(
I've tried to narrow down the problem by also testing NBMM, but it seems to hit the same issue overall. In my opinion it looks correlated with how the data is scaled: the results are even more catastrophic relative to other distributions with scaler="identity" (with NHITS, for example).
If you have even a hunch, I could take the time to dig deeper if needed.
What happened + What you expected to happen
Perhaps a conflict with the way the input data is scaled? I know that on Pytorch-Forecasting, they block the use of negative binomial when applying centered normalization: https://pytorch-forecasting.readthedocs.io/en/stable/_modules/pytorch_forecasting/metrics/distributions.html#NegativeBinomialDistributionLoss
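To make the suspected conflict concrete: the negative binomial's support is the non-negative integers, so centered (standard) scaling pushes targets below zero and off the integer grid, where the log-pmf is simply undefined. A small pure-Python sketch (synthetic counts and a hand-rolled helper, not neuralforecast code):

```python
import math

def negbin_log_pmf(k, r, p):
    """Log pmf of NegativeBinomial(r, p); only defined for integer k >= 0."""
    if k < 0 or k != int(k):
        raise ValueError(f"NB support is the non-negative integers, got {k}")
    return (math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
            + r * math.log(1.0 - p) + k * math.log(p))

# Raw count data: every value is in the support.
counts = [3, 7, 0, 12, 5, 9, 2, 6]
for k in counts:
    negbin_log_pmf(k, r=5.0, p=0.5)  # all valid, no exception raised

# Centered scaling: targets become negative and/or non-integer.
mean = sum(counts) / len(counts)
std = (sum((k - mean) ** 2 for k in counts) / len(counts)) ** 0.5
scaled = [(k - mean) / std for k in counts]
bad = [x for x in scaled if x < 0 or x != int(x)]
print(f"{len(bad)} of {len(scaled)} scaled targets fall outside NB support")
# → 8 of 8 scaled targets fall outside NB support
```

This is presumably why Pytorch-Forecasting blocks the combination outright; if neuralforecast silently feeds scaled targets into the NB likelihood instead, that would explain the degraded results.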
I can't share the results on my data, but I've coded a quick example that illustrates the problem.
Versions / Dependencies
neuralforecast==1.7.4
torch==2.3.1+cu121
Reproduction script
Output:

Issue Severity
Medium: It is a significant difficulty but I can work around it.