Question about using autodifferentiation with gpytorch model sample outputs #1542
-
I am trying to minimize a function with respect to x values. To do so, I use the code at the bottom of this post. As I understand it, I should be able to make an initial guess, set its `requires_grad` flag to `True`, run the forward pass (`scores = alpha(Xsamples, model, robustness)`), get the gradients with `scores.backward()`, and then update my initial guess with `optimizer.step()`.

However, when I run this I get the error `RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn`, which I don't understand, because I have set my initial guess to require gradients. I think this is because the `sample()` function I use to compute `scores` applies a `torch.no_grad()` context manager before computing its output. I looked on forums for help, but most of the answers were about training neural networks, so their fixes did not work in this case. Any guidance on this would be greatly appreciated, thank you.
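Since the code referenced above isn't included in the thread, here is a minimal sketch of the pattern described, with a hypothetical `alpha` and a toy GP model standing in for the originals (the `robustness` argument is dropped for brevity):

```python
import torch
import gpytorch

# Toy ExactGP model, only so the sketch is self-contained; not the original model.
class GPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = GPModel(torch.randn(10, 2), torch.randn(10), likelihood)
model.eval()

# Hypothetical stand-in for alpha: scores a candidate by drawing posterior samples.
def alpha(Xsamples, model):
    posterior = model(Xsamples)  # posterior MultivariateNormal at Xsamples
    # .sample() is wrapped in torch.no_grad() by torch.distributions, so the
    # returned tensor has no grad_fn and the graph back to Xsamples is cut.
    samples = posterior.sample(torch.Size([16]))
    return samples.mean()

Xsamples = torch.randn(1, 2, requires_grad=True)  # initial guess
scores = alpha(Xsamples, model)
scores.backward()  # RuntimeError: element 0 of tensors does not require grad ...
```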
-
What is `alpha`? Also, you should make `Xsamples` a PyTorch parameter: `Xsamples = torch.nn.Parameter(torch.randn(1, 2))`
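A minimal sketch of that suggestion, reusing the toy model from the question sketch above. Registering `Xsamples` as a `torch.nn.Parameter` lets the optimizer update it directly. The sketch also swaps `.sample()` for `.rsample()`, the reparameterized, differentiable sampler, since `torch.distributions` wraps `.sample()` in `torch.no_grad()`; that swap is not stated in this reply, but it addresses the no-grad issue the question itself diagnoses:

```python
import torch
import gpytorch

# Same toy model as in the question sketch (stand-in for the real model).
class GPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = GPModel(torch.randn(10, 2), torch.randn(10), likelihood)
model.eval()

# Register the candidate inputs as a Parameter, as suggested above.
Xsamples = torch.nn.Parameter(torch.randn(1, 2))
optimizer = torch.optim.Adam([Xsamples], lr=0.1)

for _ in range(50):
    optimizer.zero_grad()
    posterior = model(Xsamples)
    # rsample() keeps the autograd graph, unlike sample(), so gradients
    # flow from the Monte Carlo score back to Xsamples.
    scores = posterior.rsample(torch.Size([16])).mean()
    scores.backward()
    optimizer.step()
```

With this setup, `Xsamples.grad` is populated after `backward()` and `optimizer.step()` moves the candidate inputs.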