Reasonable Expectations in Parametric GP vs Exact GP performance #1559
dangthatsright
started this conversation in
General
Replies: 1 comment 4 replies
-
This is definitely dataset dependent. If you are getting such a high NLL with an exact GP, this suggests that either there is not much signal in your data, or you are not using a kernel that is well suited to it.
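One rough way to calibrate what "high NLL" means here: assuming the targets are standardized to zero mean and unit variance (as is common in these benchmarks), a trivial model that predicts N(0, 1) everywhere has an expected per-point NLL of 0.5·log(2π) + 0.5 ≈ 1.42 nats, so per-point NLLs near 1.0 already suggest only modest signal is being captured. A minimal sketch in plain Python (`gaussian_nll` is an illustrative helper, not GPyTorch API):

```python
import math

def gaussian_nll(y, mean, var):
    """Negative log-likelihood of y under a Gaussian N(mean, var)."""
    return 0.5 * (math.log(2.0 * math.pi * var) + (y - mean) ** 2 / var)

# Even a perfect prediction with unit variance costs 0.5*log(2*pi) ~ 0.92 nats:
print(gaussian_nll(0.0, 0.0, 1.0))

# For standardized targets y ~ N(0, 1), the trivial predictor mean=0, var=1
# has an expected NLL of 0.5*log(2*pi) + 0.5 ~ 1.42 nats per point:
print(0.5 * math.log(2.0 * math.pi) + 0.5)
```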
-
Hi, I'm looking at https://arxiv.org/pdf/1910.07123.pdf and found the results very promising; the better NLL values in particular would be very helpful in my use case. However, when trying these methods on my two datasets, I get around NLL = 1.0 for exact GPs and NLL = 1.3 for parametric GPs on both. I would like to know whether I am doing something wrong or whether these are cases where exact GPs are simply better. One dataset has 5k+ data points and I'm using 2k inducing points, so I initially thought that might account for the difference, but my other dataset has only 1.8k data points and I still see the same discrepancy, which leads me to think that perhaps something is wrong.
Some other notes:
Any insights would be appreciated! I still have trouble following the details of variational GPs, so my prior that I am misusing them is high :P
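One thing worth ruling out before comparing against the paper's numbers: Gaussian NLL is not invariant to the scale of the targets, so values are only comparable if both models (and the paper) see identically standardized data. A quick sanity check in plain Python (`gaussian_nll` is an illustrative helper, not GPyTorch API):

```python
import math

def gaussian_nll(y, mean, var):
    """Negative log-likelihood of y under a Gaussian N(mean, var)."""
    return 0.5 * (math.log(2.0 * math.pi * var) + (y - mean) ** 2 / var)

# Rescaling targets (and the matching predictions) by a factor s shifts the
# NLL by exactly log(s), so NLLs computed on differently scaled data are
# not comparable across papers or runs.
s = 10.0
base = gaussian_nll(1.0, 0.5, 2.0)
scaled = gaussian_nll(s * 1.0, s * 0.5, s ** 2 * 2.0)
print(scaled - base)  # log(10) ~ 2.303
```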