Replies: 1 comment
I'm not too surprised by this finding, given your data. Exact GP inference is very efficient when you are conditioning on only 100 datapoints, so you won't see any speedups from any scalable method.
This sounds like some numerical instability. You could try training these models in double precision.
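For example, a minimal sketch of casting the training data to float64 (the variable names are illustrative; you would likewise call `.double()` on the model and likelihood before training):

```python
import torch

# Illustrative data; substitute your own series here.
train_x = torch.linspace(0, 1, 300)
train_y = torch.sin(train_x * 6.0)

# Cast everything to float64; in GPyTorch you would also call
# model.double() and likelihood.double() before training.
train_x, train_y = train_x.double(), train_y.double()
```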
Hello,
I was wondering if anyone could provide insight on the expected difference between using Stochastic Variational GP Regression and Exact GP Regression with the IndependentModelList container in GPyTorch. I have datasets with a large number of time series that I would like to be able to fit using both approaches. I have done some testing and it appears that using the Stochastic Variational GP Regression is actually slower and uses even more memory than the Exact GP Regression.
My datasets often have between 300 and 5000 time series, and each time series is around 300 points. Could this be because each series has too few time points? I have also noticed that for larger datasets, the Stochastic Variational GP Regression often fails with an error that the matrix is not positive definite. Is there a specific reason for this? Thank you for any insight you can provide.
Below is a toy setup I built to make this clearer.
Creating synthetic data:
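A minimal sketch of such a setup (the series count, time grid, and sine-based generator here are illustrative stand-ins):

```python
import math
import torch

# Illustrative sizes: a few hundred series, ~300 time points each.
num_series, num_points = 300, 300
train_x = torch.linspace(0, 1, num_points)
train_y = torch.stack([
    torch.sin(train_x * (2 + i % 5) * math.pi) + 0.1 * torch.randn(num_points)
    for i in range(num_series)
])  # train_y has shape (num_series, num_points)
```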
Defining the ExactGP and using the `IndependentModelList` container:
Training:
I get the following output:
And one time series looks like this:
For the SVGP I defined it as:
I also created a class `SumVariationalELBO` that follows the same logic as the sum marginal log likelihood one.
Then, I created the inducing points:
And I ran the exact same training procedure; the only change was the mll.
In terms of time and memory the output was the following:
And one time series looks like: