Continuity of prediction #1254
Comments
Hello! Any particular reason why you skip 2023 entirely? Are you trying to fit a model that predicts with a 1-year offset?
A year is a rather extreme gap, so I would like to ask whether such a solution exists. In my forecasting task, I need to forecast a 24-hour period for a site using that site's historical labels and weather data for the next 24 hours. The current behaviour may not be flexible enough for this. My own view is that a model that, at any point in time, predicts from the last hist_len labels and the next futr_len exogenous variables would generalise better. Also, in other projects I've been involved in, such as PV forecasting, recent historical labels are often not available during the feasibility-study phase, which makes it hard for me to use this model. Your help would be appreciated!
What I want to express is this: if I set the model's historical input size to (30, 24, f1) (days, hours, number of historical features), the future input size to (24, f2) (hours, number of future features), and the model output to (24, 1), then I only need to ensure that the historical and future inputs have no missing values at prediction time to guarantee an output (see the sketch below). In that case, requiring the training set and test set to be contiguous only adds difficulty; is that requirement even necessary?
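A minimal sketch of the windowing contract described above. All names and dimensions here are illustrative (not NeuralForecast API); f1 and f2 stand for the feature counts mentioned in the comment:

```python
import numpy as np

# Hypothetical dimensions from the description above.
days, hours = 30, 24
f1, f2 = 5, 3  # historical / future exogenous feature counts (illustrative)

hist = np.random.rand(days, hours, f1)  # (30, 24, f1): historical labels + features
futr = np.random.rand(hours, f2)        # (24, f2): known future covariates (e.g. weather)

# The desired contract: any input built only from `hist` and `futr` should
# yield a (24, 1) forecast, regardless of where the window sits in time.
hist_flat = hist.reshape(days * hours, f1)  # flatten days/hours into one history axis
model_input = (hist_flat, futr)
target_shape = (hours, 1)                   # (24, 1) forecast for the next day
```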
I may have found a solution (modified code). However, the modified code produces different results from the original (result after modification).
Hello! I'm not entirely sure of what your question is here, but you seem to have found a solution. Do you need more help, or can we close this issue? |
Yeah. The library code produces predictions for the future based on the model, recent labels (which may be discontinuous from the training set), and future covariates (the modified code above). However, I have found that, all else being equal, the modified code yields results that are inconsistent with the original code and do not fit as well; this is the problem I am currently experiencing. I have just had a nice holiday (the Spring Festival), and I will try to solve this problem as soon as possible, starting today.
The inconsistency between the two results turned out to be caused by the different lengths of the input historical data, which produce different normalisation results. It can be fixed if the predict_step function of the BaseRecurrent class in neuralforecast/common/_base_recurrent.py limits the raw data to a fixed history length. Thank you very much; the code works for my needs for now (the change in model performance was not caused by my code change). Finally, I will use my free time to investigate why different lengths of historical data give different results after normalisation.
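The normalisation effect described here is easy to reproduce in isolation. The sketch below is plain NumPy, not library code: it shows that standard-scaling the same window with statistics computed over different history lengths gives different scaled inputs, so the model sees different values even though the raw window is identical:

```python
import numpy as np

rng = np.random.default_rng(0)
series = rng.normal(10.0, 2.0, size=1000)  # synthetic history

window = series[-24:]  # the 24 points actually fed to the model

def scale(window, history):
    # Standard scaling with mean/std computed over `history`.
    return (window - history.mean()) / history.std()

short = scale(window, series[-48:])  # stats from the last 48 points only
full = scale(window, series)         # stats from the full 1000 points

print(np.allclose(short, full))  # False: same raw window, different scaled inputs
```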
What happened + What you expected to happen
What happened:
Currently, the prediction window is required to be contiguous with the training set. I want to use data from 2022 for training and data from 2024 for testing, but this does not work for me.
What I expected to happen:
"I want to use data of 2022 for training, and data of 2024 for test", Is there have some solutions for it. Thanks for your help.
What I have tried:
I've tried to modify the source code to solve this myself, but after spending a lot of time on it I was unable to solve the problem.
Versions / Dependencies
Default versions installed via pip.
Reproduction script
Related code for "I want to use data of 2022 for training, and data of 2024 for test" (see the sketch below).
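The original reproduction script was not included, so the following is a hedged sketch of the intended workflow, assuming the standard NeuralForecast API with its unique_id/ds/y column convention. The file names, model choice, and hyperparameters are illustrative; passing a df to predict supplies the input window explicitly, which anchors the forecast to the end of that df rather than to the training set:

```python
import pandas as pd
from neuralforecast import NeuralForecast
from neuralforecast.models import LSTM

# Hypothetical data files; both follow the unique_id, ds, y column convention.
df_2022 = pd.read_csv('train_2022.csv', parse_dates=['ds'])  # training data (2022)
df_2024 = pd.read_csv('hist_2024.csv', parse_dates=['ds'])   # recent 2024 history

# Illustrative model and hyperparameters; the issue concerns recurrent models.
nf = NeuralForecast(models=[LSTM(h=24, max_steps=100)], freq='H')
nf.fit(df=df_2022)

# Supplying df here makes the 2024 history (not the 2022 training data)
# the input window for the forecast, which is the behaviour being requested.
forecast = nf.predict(df=df_2024)
```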
Issue Severity
None