Replies: 2 comments 2 replies
-
AFAIK, the marginal log likelihood is not a good measure of performance across different kernel choices. Instead, one should compare a proper metric like MSE, or something that also takes the uncertainty into account (see the docs on metrics in GPyTorch).
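To make the distinction concrete, here is a minimal NumPy sketch of the two kinds of metric mentioned above: plain MSE, which only scores the predictive mean, and an uncertainty-aware score (average Gaussian negative log predictive density). This is a hand-rolled illustration, not the GPyTorch API; `gpytorch.metrics` provides ready-made equivalents.

```python
import numpy as np

def mse(y_true, y_mean):
    """Mean squared error of the predictive mean alone."""
    return float(np.mean((y_true - y_mean) ** 2))

def gaussian_nlpd(y_true, y_mean, y_var):
    """Average negative log predictive density under a Gaussian
    predictive distribution -- penalizes over- or under-confident
    variances, not just mean error."""
    return float(np.mean(
        0.5 * np.log(2.0 * np.pi * y_var)
        + 0.5 * (y_true - y_mean) ** 2 / y_var
    ))

# Toy comparison: two hypothetical models with identical means
# but different uncertainty estimates.
y_true = np.array([1.0, 2.0, 3.0])
y_mean = np.array([1.1, 1.9, 3.2])

well_calibrated = gaussian_nlpd(y_true, y_mean, np.full(3, 0.05))
overconfident = gaussian_nlpd(y_true, y_mean, np.full(3, 1e-4))
# Same MSE in both cases, but the overconfident model scores
# much worse on NLPD.
```

Two kernels can tie on MSE while one of them is badly miscalibrated, which is exactly what an uncertainty-aware metric exposes.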
-
The spectral mixture kernel is very sensitive to initialization, and optimization can be tricky. I would try doing 100 random initializations and seeing which one works best, especially using the metrics that @Turakar suggests.
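The random-restart strategy suggested above can be sketched generically: run the same optimizer from many random starting points and keep the run with the best final loss. The toy objective `neg_mll` below is a hypothetical stand-in for a model's negative MLL (a real GP training loop would take its place); the restart-and-select pattern is the point.

```python
import numpy as np

def neg_mll(x):
    """Hypothetical surrogate for a training loss: multimodal,
    so the starting point determines which basin we land in."""
    return np.sin(3.0 * x) + 0.1 * x ** 2

def grad(x, eps=1e-5):
    """Central finite-difference gradient of the surrogate loss."""
    return (neg_mll(x + eps) - neg_mll(x - eps)) / (2.0 * eps)

def optimize(x0, lr=0.01, steps=500):
    """Plain gradient descent from a given initialization."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x, neg_mll(x)

# Random-restart pattern: many initializations, keep the best run.
rng = np.random.default_rng(0)
results = [optimize(x0) for x0 in rng.uniform(-5.0, 5.0, size=100)]
best_x, best_loss = min(results, key=lambda r: r[1])
```

With a GP model, each restart would re-draw the spectral mixture parameters (e.g. via `initialize_from_data` on resampled data, or random frequencies), retrain, and compare the held-out metrics rather than the training loss.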
-
Hello @gpleiss. I trained the multi-task GP model with RQKernel and got decent results; the negative mll went down to -6.25. I read some of A. G. Wilson's papers, in which he says that RBF (and other popular) kernels do not discover patterns; they just act as filters. You also recommended that I read one of A. G. Wilson's papers on pattern discovery and extrapolation.
So I switched to the SpectralMixtureKernel, initializing its parameters with the initialize_from_data method. Still, the negative mll gets stuck around 15.69 after 20 epochs and never improves after that. Other than that, I kept everything the same (inducing points, etc.) to have a fair comparison; my code selects RQKernel or SpectralMixtureKernel depending on a learnKernel flag. The Functional Kernel Learning paper also discusses some initialization-related issues in its Appendix D; that is why I was careful with the initialization. Any practical advice to get better performance?
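For reference, the quantity the initialization is fighting over is the spectral mixture kernel itself. Below is a NumPy sketch of the 1-D form from Wilson & Adams (2013); the two-component parameters are made up for illustration, and in practice GPyTorch's SpectralMixtureKernel with initialize_from_data would set them from the data's empirical spectrum.

```python
import numpy as np

def sm_kernel(tau, weights, means, variances):
    """1-D spectral mixture kernel (Wilson & Adams, 2013):
    k(tau) = sum_q w_q * exp(-2 pi^2 tau^2 v_q) * cos(2 pi tau mu_q),
    where mu_q are frequency means and v_q frequency variances."""
    tau = np.asarray(tau, dtype=float)[..., None]
    return np.sum(
        weights
        * np.exp(-2.0 * np.pi ** 2 * tau ** 2 * variances)
        * np.cos(2.0 * np.pi * tau * means),
        axis=-1,
    )

# Hypothetical two-component mixture: one low, one high frequency.
w = np.array([0.7, 0.3])
mu = np.array([0.5, 2.0])
v = np.array([0.1, 0.05])

taus = np.linspace(-3, 3, 7)
k = sm_kernel(taus, w, mu, v)
# k(0) equals the sum of the mixture weights.
```

Because the loss surface over (w, mu, v) is highly multimodal, a bad draw of the frequency means mu can leave the optimizer stuck in a poor local optimum, which is consistent with the negative mll plateauing early.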