Problem described in this issue: pytorch/pytorch#46820 (it means NVAE doesn't work with pytorch==1.7 and later).
I have tested the code with pytorch==1.7, and this one change is all that is needed to support 1.7, which brings many improvements over 1.6 (such as Windows support for distributed training).
I came across this when running the code with pytorch==1.7 and getting the error message below (this change fixes the problem):
"""
/home/iman/projs/NVAE/distributions.py:31: UserWarning: Output 0 of SplitBackward is a view and is being modified inplace. This view is an output of a function that returns multiple views. Inplace operators on such views are being deprecated and will be forbidden starting from version 1.8. Consider using `unsafe_` version of the function that produced this view or don't modify this view inplace. (Triggered internally at /pytorch/torch/csrc/autograd/variable.cpp:491.)
  self.mu = soft_clamp5(mu)
/home/iman/projs/NVAE/distributions.py:32: UserWarning: Output 1 of SplitBackward is a view and is being modified inplace. This view is an output of a function that returns multiple views. Inplace operators on such views are being deprecated and will be forbidden starting from version 1.8. Consider using `unsafe_` version of the function that produced this view or don't modify this view inplace. (Triggered internally at /pytorch/torch/csrc/autograd/variable.cpp:491.)
  log_sigma = soft_clamp5(log_sigma)
Traceback (most recent call last):
  File "train.py", line 415, in <module>
    init_processes(0, size, main, args)
  File "train.py", line 281, in init_processes
    fn(args)
  File "train.py", line 92, in main
    train_nelbo, global_step = train(train_queue, model, cnn_optimizer, grad_scalar, global_step, warmup_iters, writer, logging)
  File "train.py", line 164, in train
    logits, log_q, log_p, kl_all, kl_diag = model(x)
  File "/home/iman/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/iman/projs/NVAE/model.py", line 358, in forward
    dist = Normal(mu_q, log_sig_q)  # for the first approx. posterior
  File "/home/iman/projs/NVAE/distributions.py", line 32, in __init__
    log_sigma = soft_clamp5(log_sigma)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  File "/home/iman/projs/NVAE/distributions.py", line 19, in soft_clamp5
    # xx = 5.0*torch.tanh( x / 5.0)
    # return 5.0*torch.tanh( x / 5.0)
    return x.div(5.).tanh_().mul(5.)  # 5. * torch.tanh(x / 5.) <--> soft differentiable clamp between [-5, 5]
           ~~~~~~ <--- HERE
RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/autograd/variable.cpp":363, please report a bug to PyTorch.
"""