Hello,
I'm interested in fitting a variational approximation to a "Hadamard" multi-task GP: in my data, each input is associated with a single task, and I have an extra vector indicating the task membership of each input/output pair.
From the example in the documentation, I understood that the ApproximateGP class supports the Hadamard setup, and that I just need to pass an extra task_indices argument when I call the model.
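For reference, here is a minimal sketch of the kind of model I am using, adapted from the documented LMC example; the class name, num_latents, num_tasks, and the kernel choice are simplified placeholders rather than my exact code:

```python
import torch
import gpytorch


class MultitaskGPModel(gpytorch.models.ApproximateGP):
    def __init__(self, inducing_points, num_latents, num_tasks):
        # inducing_points: num_latents x m x d (one set of inducing points per latent GP)
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(-2), batch_shape=torch.Size([num_latents])
        )
        # The LMC strategy mixes num_latents latent GPs into num_tasks outputs
        variational_strategy = gpytorch.variational.LMCVariationalStrategy(
            gpytorch.variational.VariationalStrategy(
                self, inducing_points, variational_distribution, learn_inducing_locations=True
            ),
            num_tasks=num_tasks,
            num_latents=num_latents,
            latent_dim=-1,
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean(batch_shape=torch.Size([num_latents]))
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(batch_shape=torch.Size([num_latents])),
            batch_shape=torch.Size([num_latents]),
        )

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )
```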
However, if I keep the conventional variational multi-task likelihood and MLL (MultitaskGaussianLikelihood() and VariationalELBO() with the corresponding num_tasks and num_data parameters), training never runs; it fails at the first loss evaluation.
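Concretely, the failing setup looks roughly like this (again a simplified sketch; train_x, train_i, and train_y stand for my inputs, task indices, and scalar targets):

```python
likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=num_tasks)
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.size(0))
optimizer = torch.optim.Adam(
    [{"params": model.parameters()}, {"params": likelihood.parameters()}], lr=0.1
)

model.train()
likelihood.train()
optimizer.zero_grad()
output = model(train_x, task_indices=train_i)  # Hadamard-style call
loss = -mll(output, train_y)                   # <- fails with the error below
loss.backward()
optimizer.step()
```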
Please find below a section of the error message:
```
File ~/anaconda3/envs/my_GP/lib/python3.8/site-packages/gpytorch/likelihoods/multitask_gaussian_likelihood.py:135, in _MultitaskGaussianLikelihoodBase.shaped_noise_covar(self, shape, add_noise, interleaved, *params, **kwargs)
    131 dtype, device = task_noise_covar_factor.dtype, task_noise_covar_factor.device
    132 ckl_init = KroneckerProductLinearOperator
    134 eye_lt = ConstantDiagLinearOperator(
--> 135     torch.ones(*shape[:-2], 1, dtype=dtype, device=device), diag_shape=shape[-2]
    136 )
    137 task_var_lt = task_var_lt.expand(*shape[:-2], *task_var_lt.matrix_shape)  # pyre-ignore[6]
    139 # to add the latent noise we exploit the fact that
    140 # I \kron D_T + \sigma^2 I_{NT} = I \kron (D_T + \sigma^2 I)
    141 # which allows us to move the latent noise inside the task dependent noise
    142 # thereby allowing exploitation of Kronecker structure in this likelihood.
```
My guess is that the multi-task likelihood and MLL do not handle Hadamard-style data correctly. In a conventional multi-task GP example, each input is observed at every task, so for n inputs the output tensor has shape [n, p], where p is the number of tasks. Since my data comes in Hadamard style, the output tensor only has shape [n]. I tried adding a trailing dimension of size 1 to the output tensor, but that did not solve the problem.
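To make the layout difference concrete, here is a toy illustration of the two data formats (purely hypothetical shapes, with p = 4 tasks):

```python
import torch

n, p = 100, 4

# Conventional multi-task: every input is observed at all p tasks
train_x_full = torch.linspace(0, 1, n)  # inputs, shape [n]
train_y_full = torch.randn(n, p)        # outputs, shape [n, p]

# Hadamard-style: each input is observed at exactly one task
train_x = torch.linspace(0, 1, n)       # inputs, shape [n]
train_i = torch.randint(0, p, (n,))     # task index per observation, shape [n]
train_y = torch.randn(n)                # outputs, shape [n]
```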
I have also tried replacing the likelihood with GaussianLikelihood (instead of the Multitask variant), keeping everything else the same. The model then trains, but when I switch the model and likelihood to evaluation mode and make predictions (which is essential for my current project), I get a new error message, which I believe is caused by an incompatibility between the non-multitask likelihood and the LMC-based model.
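For completeness, this is roughly the prediction code that fails in that variant (test_x and test_i are hypothetical test inputs and task indices):

```python
likelihood = gpytorch.likelihoods.GaussianLikelihood()
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.size(0))
# ... training now runs without errors ...

model.eval()
likelihood.eval()
with torch.no_grad():
    # this is where I hit the new error
    preds = likelihood(model(test_x, task_indices=test_i))
```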
Does anyone have an idea about this? The question is essentially how to fit a variational approximation to a Hadamard multi-task GP.
Thanks in advance!