PyTorch backward graph error

I tried to run the network in this repository, but when I update the different parts of the GAN I get a painful error:

/home/francesco/Desktop/test/VAE-GAN/network.py:187: UserWarning: nn.init.uniform is now deprecated in favor of nn.init.uniform_.
  nn.init.uniform(m.weight,-scale,scale)
/home/francesco/Desktop/test/VAE-GAN/network.py:189: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
  nn.init.constant(m.bias, 0.0)
Epoch:0
/home/francesco/Desktop/test/test/lib/python3.8/site-packages/torch/nn/functional.py:1709: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
  warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
[W python_anomaly_mode.cpp:104] Warning: Error detected in AddmmBackward. Traceback of forward call that caused the error:
  File "main.py", line 97, in <module>
    x_tilde, disc_class, disc_layer, mus, log_variances = net(x)
  File "/home/francesco/Desktop/test/VAE-GAN/network.py", line 224, in __call__
    return super(VaeGan, self).__call__(*args, **kwargs)
  File "/home/francesco/Desktop/test/test/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/francesco/Desktop/test/VAE-GAN/network.py", line 199, in forward
    mus, log_variances = self.encoder(x)
  File "/home/francesco/Desktop/test/VAE-GAN/network.py", line 79, in __call__
    return super(Encoder, self).__call__(*args, **kwargs)
  File "/home/francesco/Desktop/test/test/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/francesco/Desktop/test/VAE-GAN/network.py", line 75, in forward
    logvar = self.l_var(ten)
  File "/home/francesco/Desktop/test/test/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/francesco/Desktop/test/test/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 94, in forward
    return F.linear(input, self.weight, self.bias)
  File "/home/francesco/Desktop/test/test/lib/python3.8/site-packages/torch/nn/functional.py", line 1753, in linear
    return torch._C._nn.linear(input, weight, bias)
 (function _print_stack)
Traceback (most recent call last):
  File "main.py", line 140, in <module>
    loss_decoder.backward(retain_graph=True)  #[p.grad.data.clamp_(-1,1) for p in net.decoder.parameters()]
  File "/home/francesco/Desktop/test/test/lib/python3.8/site-packages/torch/tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/francesco/Desktop/test/test/lib/python3.8/site-packages/torch/autograd/__init__.py", line 145, in backward
    Variable._execution_engine.run_backward(
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1024, 128]], which is output 0 of TBackward, is at version 3; expected version 2 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

I already searched online, and it seems the second backward pass fails because the network has already updated the variables in place. Since this is a delicate part of the training loop, I'm asking for your support.

Based on the linked repository, you are training a GAN-like model with several separately optimized sub-networks, which is exactly the setting where this error typically appears.
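A common cause (assuming your loop follows the usual VAE-GAN pattern of backpropagating encoder, decoder, and discriminator losses through a shared graph) is calling an optimizer's `step()` between two `backward()` calls that reuse that graph: `step()` updates the weights in place, bumping their version counters, so the retained graph's saved tensors no longer match and autograd raises exactly the "version 3; expected version 2" error you see. A minimal sketch with hypothetical module names, not the repository's actual code:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-ins for the shared encoder and two loss heads (hypothetical names).
encoder = nn.Linear(4, 4)
head_a = nn.Linear(4, 1)
head_b = nn.Linear(4, 1)

opt_enc = torch.optim.SGD(encoder.parameters(), lr=0.1)
opt_a = torch.optim.SGD(head_a.parameters(), lr=0.1)
opt_b = torch.optim.SGD(head_b.parameters(), lr=0.1)

x = torch.randn(8, 4)
h = encoder(x)          # one forward pass shared by both losses
loss_a = head_a(h).mean()
loss_b = head_b(h).mean()

# Broken pattern: stepping an optimizer between the two backward passes
# modifies encoder.weight in place, so the retained graph sees the weight
# at a newer version than it recorded and the second backward fails.
#
# loss_a.backward(retain_graph=True)
# opt_enc.step()        # in-place update of encoder.weight
# loss_b.backward()     # RuntimeError: ... is at version 3; expected version 2

# Working pattern: accumulate every gradient first, then step all optimizers.
loss_a.backward(retain_graph=True)
loss_b.backward()
opt_enc.step()
opt_a.step()
opt_b.step()
```

The alternative fix, if each sub-network really must be stepped before the next loss is computed, is to re-run the forward pass (or the relevant part of it) after each `step()`, so each `backward()` operates on a fresh graph built from the updated weights.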