Replies: 3 comments 3 replies
-
How exactly are you doing the un-transformation? We have some functionality for learnable outcome transforms in BoTorch. Re warping: if that is useful, we also have learnable input warping (of the features), e.g. https://github.com/pytorch/botorch/blob/master/botorch/models/transforms/input.py#L570
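For readers unfamiliar with the linked transform: BoTorch's `Warp` input transform is based on the Kumaraswamy CDF. Here is a minimal pure-Python sketch of that warping function (the parameter names `a` and `b` are mine, not BoTorch's API; in BoTorch the concentration parameters are learned jointly with the GP hyperparameters):

```python
def kumaraswamy_warp(x, a=1.0, b=1.0):
    """Kumaraswamy CDF: a monotone map from [0, 1] to [0, 1].

    a, b > 0 control the shape; a = b = 1 is the identity, other
    values squash or stretch regions of the input space so the GP
    can model non-stationary behavior on warped coordinates.
    """
    assert 0.0 <= x <= 1.0 and a > 0.0 and b > 0.0
    return 1.0 - (1.0 - x ** a) ** b

# warp each (normalized) feature before feeding it to the kernel
warped = [kumaraswamy_warp(x, a=2.0, b=0.5) for x in (0.1, 0.5, 0.9)]
```

Monotonicity and the fixed endpoints (0 maps to 0, 1 maps to 1) are what make this safe as an input warping: it reparametrizes the space without changing the domain.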
-
One idea is an inverse softplus transform of the observations.
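To make that concrete, a minimal sketch (pure Python, not BoTorch API): fit the GP on inverse-softplus-transformed observations, then map predictions back through softplus, which guarantees positivity of the back-transformed values.

```python
import math

def softplus(x):
    # numerically stable softplus: log(1 + exp(x))
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def inv_softplus(y):
    # inverse of softplus, defined for y > 0: log(exp(y) - 1)
    assert y > 0.0
    return math.log(math.expm1(y))

observations = [0.3, 1.5, 4.0]                     # positive-valued targets
latent = [inv_softplus(y) for y in observations]   # fit the GP on these
recovered = [softplus(z) for z in latent]          # back-transform predictions
```

Unlike a log transform, softplus is close to the identity for large values, so it mostly reshapes the behavior near zero.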
-
@dme65 has also implemented a Yeo-Johnson power transform for Ax that seems to work quite well. The MSE is still not great on the original scale, but on the transformed scale this can really help with the modeling. Since "relative goodness" is usually good enough for our BayesOpt approaches, this works well.
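For reference, the standard Yeo-Johnson transform looks like this. This is a sketch of the textbook formula, not @dme65's Ax implementation; in practice the parameter λ is fitted by maximum likelihood, e.g. as scikit-learn's `PowerTransformer` does.

```python
import math

def yeo_johnson(y, lam):
    """Standard Yeo-Johnson power transform with parameter lam.

    Unlike Box-Cox, it handles negative observations; lam is
    normally chosen by maximum likelihood so the transformed
    data is as close to Gaussian as possible.
    """
    if y >= 0.0:
        if abs(lam) > 1e-12:
            return ((y + 1.0) ** lam - 1.0) / lam
        return math.log1p(y)            # lam == 0 branch
    if abs(lam - 2.0) > 1e-12:
        return -(((-y + 1.0) ** (2.0 - lam) - 1.0) / (2.0 - lam))
    return -math.log1p(-y)              # lam == 2 branch
```

With λ = 1 the transform is the identity, which is a useful sanity check when fitting λ.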
-
Hi friends, I wanted to get your advice on warped GPs, which use a monotonic transformation to map GP samples to the actual interval that observations fall in. Is this something GPyTorch could support? In my case, the observations are positive real values. I tried taking a log transform of the observations and training on that; however, interpreting the predictions in the original space gives me very off results and, consequently, a very high MSE.
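One possible source of the high MSE (an assumption about the setup, not a diagnosis): if the GP is fit on log(y), then exponentiating the predictive mean recovers the predictive median, not the mean. For a Gaussian posterior N(mu, sigma^2) in log space, the mean on the original scale needs a variance correction:

```python
import math

def lognormal_mean(mu, sigma):
    # if log Y ~ N(mu, sigma^2), then E[Y] = exp(mu + sigma^2 / 2);
    # exp(mu) alone is only the median of Y and underestimates the mean
    return math.exp(mu + 0.5 * sigma ** 2)

mu, sigma = 1.0, 0.8                   # hypothetical GP posterior in log space
naive = math.exp(mu)                   # median; biased low as a mean estimate
corrected = lognormal_mean(mu, sigma)  # proper predictive mean
```

When the predictive variance is large, the gap between exp(mu) and the corrected mean is substantial, which by itself can inflate MSE measured on the original scale.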