Fix issue with pytorch #21

Open · wants to merge 1 commit into master
Commits on Feb 2, 2021

  1. Fix issue with pytorch

    Fixes this issue: pytorch/pytorch#46820
    I ran into this when running the code with pytorch==1.7; it produced the error message below, which this change fixes:
    """
    /home/iman/projs/NVAE/distributions.py:31: UserWarning: Output 0 of SplitBackward is a view and is being modified inplace. This view is an output of a function that returns multiple views. Inplace operators on such views are being deprecated and will be forbidden starting from version 1.8. Consider using `unsafe_` version of the function that produced this view or don't modify this view inplace. (Triggered internally at  /pytorch/torch/csrc/autograd/variable.cpp:491.)
      self.mu = soft_clamp5(mu)
    /home/iman/projs/NVAE/distributions.py:32: UserWarning: Output 1 of SplitBackward is a view and is being modified inplace. This view is an output of a function that returns multiple views. Inplace operators on such views are being deprecated and will be forbidden starting from version 1.8. Consider using `unsafe_` version of the function that produced this view or don't modify this view inplace. (Triggered internally at  /pytorch/torch/csrc/autograd/variable.cpp:491.)
      log_sigma = soft_clamp5(log_sigma)
    Traceback (most recent call last):
      File "train.py", line 415, in <module>
        init_processes(0, size, main, args)
      File "train.py", line 281, in init_processes
        fn(args)
      File "train.py", line 92, in main
        train_nelbo, global_step = train(train_queue, model, cnn_optimizer, grad_scalar, global_step, warmup_iters, writer, logging)
      File "train.py", line 164, in train
        logits, log_q, log_p, kl_all, kl_diag = model(x)
      File "/home/iman/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/iman/projs/NVAE/model.py", line 358, in forward
        dist = Normal(mu_q, log_sig_q)   # for the first approx. posterior
      File "/home/iman/projs/NVAE/distributions.py", line 32, in __init__
        log_sigma = soft_clamp5(log_sigma)
    RuntimeError: The following operation failed in the TorchScript interpreter.
    Traceback of TorchScript (most recent call last):
      File "/home/iman/projs/NVAE/distributions.py", line 19, in soft_clamp5
        # xx = 5.0*torch.tanh( x / 5.0)
        # return  5.0*torch.tanh( x / 5.0)
        return x.div_(5.).tanh_().mul(5.)    #  5. * torch.tanh(x / 5.) <--> soft differentiable clamp between [-5, 5]
               ~~~~~~ <--- HERE
    RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/autograd/variable.cpp":363, please report a bug to PyTorch.
    """
    ImanHosseini authored Feb 2, 2021
    Commit b576900