Replies: 2 comments 2 replies
-
I assume you are talking about the posterior variance? It is approximately true when the likelihood noise is much smaller than the GP covariance, but in general it does not hold. To see this, note that the posterior variance of the latent function at the training inputs is given by $K_{XX} - K_{XX}(K_{XX} + \Sigma)^{-1}K_{XX} = (K_{XX}^{-1} + \Sigma^{-1})^{-1}$, where $\Sigma$ is the diagonal matrix of fixed noise variances; this is close to $\Sigma$ only when $\Sigma$ is small relative to $K_{XX}$.
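As a quick numeric illustration of that formula (a sketch with made-up numbers): in the one-training-point case it reduces to $k\sigma^2/(k+\sigma^2)$ for prior variance $k$ and noise variance $\sigma^2$.

```python
# One-training-point check (illustrative numbers only).
k = 1.0                                # prior (kernel) variance at the training point
for s2 in (1e-4, 1e-2, 1.0):           # fixed likelihood noise variance
    post_var_f = k * s2 / (k + s2)     # posterior variance of the latent f
    pred_var_y = post_var_f + s2       # predictive variance of y = f + noise
    print(f"s2={s2:g}  latent={post_var_f:.2e}  predictive={pred_var_y:.2e}")
# Only when s2 << k does the latent posterior variance approach s2;
# the predictive variance (latent + noise) is always at least s2.
```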
-
Ah sorry, I misunderstood the question. You're right, the predicted variance should not be smaller than the likelihood noise. That being said, please post a complete code example reproducing the issue.
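For reference, a minimal sketch of the kind of script that would help (made-up data; it assumes an anisotropic RBF kernel, a constant mean, and a FixedNoiseGaussianLikelihood with per-point noise variances, as described in the question):

```python
import torch
import gpytorch

class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(ard_num_dims=train_x.shape[-1])
        )

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

# Made-up training data with a known noise standard deviation of 0.1 per point.
train_x = torch.rand(20, 2)
train_y = torch.sin(train_x.sum(-1)) + 0.1 * torch.randn(20)
train_noise = torch.full((20,), 0.1 ** 2)   # the likelihood expects variances

likelihood = gpytorch.likelihoods.FixedNoiseGaussianLikelihood(noise=train_noise)
model = ExactGPModel(train_x, train_y, likelihood)

# (Hyperparameter training omitted; the decomposition below holds either way.)
model.eval()
likelihood.eval()

with torch.no_grad():
    # GPyTorch may warn that the inputs match the training data; that is expected here.
    latent = model(train_x)          # posterior over the latent function f
    predictive = likelihood(latent)  # posterior over y = f + noise

print("noise std:            ", likelihood.noise.sqrt()[0].item())
print("latent posterior std: ", latent.variance.sqrt()[0].item())
print("predictive std:       ", predictive.variance.sqrt()[0].item())
# predictive.variance = latent.variance + likelihood.noise (elementwise),
# so the predictive std cannot drop below the noise std.
```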
-
Hello everyone, this might be a bit of a naive question, but I'm struggling with it and cannot find any information.
Say I have some observations with known heteroskedastic noise as a training set, so I am using an ExactGP with a FixedNoiseGaussianLikelihood.
In my theoretical understanding of GPR, if I fix the variance of the likelihood (i.e. the noise), then at the training points the predictive variance of the GP should be "almost equal" to the likelihood noise. This property is very important for me.
This does not seem to be the case when using an ExactGP with FixedNoiseGaussianLikelihood. Is there something I'm missing about how GPyTorch handles variance prediction? Am I doing something wrong at the implementation level? What do you think?
Here is what I get for one of my training points (standard deviation):
my measurement tolerance: 0.1
likelihood noise ( by model.likelihood.noise.sqrt() ): 0.1
predicted std ( by likelihood(model(point)).variance.sqrt() ) : 0.001
I am using a basic anisotropic RBF kernel + a constant mean, if that is useful.
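In case it helps to narrow this down, here is the small check I am running (a sketch; `model`, `likelihood`, and `point` stand for the trained objects and one training input described above, and the `noise=` value is taken from my measurement tolerance of 0.1):

```python
import torch

with torch.no_grad():
    latent = model(point)  # posterior over the latent f at this point
    # Pass the per-point noise variance explicitly so there is no ambiguity
    # about which stored noise entry applies to this single input:
    predictive = likelihood(latent, noise=torch.tensor([0.1 ** 2]))

print("latent std:    ", latent.variance.sqrt().item())
print("noise std:     ", 0.1)
print("predictive std:", predictive.variance.sqrt().item())
# predictive.variance should be latent.variance + noise, hence never below the noise.
```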