My data has dimensions [B, N, D]: the first dimension is the batch size, the second is the sequence length of each sample, and the third is the feature channel.
Before feeding it into the approximate Gaussian process, I flatten the first two dimensions so the input is [BN, D]. The output of my Gaussian process is then [BN, T], where T is the number of tasks. Is there a problem with doing it this way? I have two issues:
1. The covariance matrix is computed between all points in the mini-batch, but my samples are actually independent of each other, so there is no need to compute the covariance between points from different samples.
2. Because of this, the only workaround I have is a for loop over the batch dimension, running each sample through a single defined Gaussian process one at a time. Is this reasonable?
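To illustrate the first issue with a minimal NumPy sketch (shapes and the RBF kernel here are made up for demonstration): flattening [B, N, D] into [BN, D] makes the kernel produce a single [BN, BN] covariance matrix that contains cross-sample entries, whereas evaluating the kernel per sample keeps a separate [N, N] matrix for each one.

```python
import numpy as np

# Hypothetical sizes for illustration
B, N, D = 4, 5, 3
rng = np.random.default_rng(0)
x = rng.standard_normal((B, N, D))

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential kernel between the rows of a and b
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

# Flattening mixes all B*N points into one covariance matrix,
# including entries between points from different samples.
K_flat = rbf(x.reshape(B * N, D), x.reshape(B * N, D))  # shape [BN, BN]

# A batched evaluation keeps each sample's covariance separate.
K_batch = np.stack([rbf(xi, xi) for xi in x])  # shape [B, N, N]
```

Note that the diagonal blocks of `K_flat` coincide with the per-sample matrices in `K_batch`; the off-diagonal blocks are the unwanted cross-sample covariances.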
Are you using a multitask model? And why are you flattening the data? There should be no need to flatten the data; GPyTorch can compute batches of covariance matrices.
A self-contained, reproducible code example would also be helpful.
I did use a multitask model. The reason I flattened the input data is that without flattening I get a dimensionality error.
This is my code for defining the GP. I couldn't find any examples with unflattened data in the tutorials; all I saw were two-dimensional inputs such as [B, D] fed into the Gaussian process, never [B, N, D].