Error:

```
Traceback (most recent call last):
  File "/home/banikr/.config/JetBrains/PyCharm2022.1/scratches/scratch_8.py", line 124, in <module>
    loss_dec.backward(retain_graph=True)
  File "/home/banikr/miniconda3/envs/ims37/lib/python3.7/site-packages/torch/_tensor.py", line 396, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/banikr/miniconda3/envs/ims37/lib/python3.7/site-packages/torch/autograd/__init__.py", line 175, in backward
    allow_unreachable=True, accumulate_grad=True)  # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
```
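Following the hint in the message, here is a minimal standalone sketch (not my actual code; all names here are made up for illustration) that reproduces the same version-counter error: a tensor is updated in place between a `backward(retain_graph=True)` call and a second backward over the retained graph.

```python
import torch

# Enable anomaly detection, as the error's hint suggests:
torch.autograd.set_detect_anomaly(True)

w = torch.ones(1, requires_grad=True)
loss = (w ** 2).sum()       # the graph saves w (version 0) for the backward pass
loss.backward(retain_graph=True)

with torch.no_grad():
    w += 1.0                # in-place update (like optimizer.step()); bumps w's version

try:
    loss.backward()         # reuses the old graph, which expects the old w
except RuntimeError as e:
    print(type(e).__name__, str(e)[:70])
```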

The error comes from this training loop:

```
for epoch in range(max_epochs):
    for batch_idx, data in tqdm(enumerate(train_loader), total=len(train_loader),
                                desc='Train epoch=%d' % epoch, ncols=100, leave=False):
        # epoch will be updated by train_epoch()
        x = Variable(data, requires_grad=False).float().to(device)
        x_avg = x_avg + torch.mean(x, axis=0)
        # +------------------------------+
        # |        generator loss        |
        # +------------------------------+
        x_hat, elbo_loss = net_g(x)
        x_hat_avg = x_hat_avg + torch.mean(x_hat, axis=0)
        # z = net_g.decoder()
        _, z_p, _, _ = net_g.encoder(x)
        x_p = net_g.decoder(z_p)
        # +----------------------------------+
        # |       discriminator loss         |
        # +----------------------------------+
        d = net_D(x)
        d_hat = net_D(x_hat)
        d_p = net_D(x_p)
        real_label = Variable(Tensor(x.size(0), 1).fill_(1.0), requires_grad=False).to(device)
        fake_label = Variable(Tensor(x.size(0), 1).fill_(0.0), requires_grad=False).to(device)
        loss_D_real = adversarial_loss(d, real_label)
        loss_D_fake = adversarial_loss(d_hat, fake_label)
        loss_D_prior = adversarial_loss(d_p, fake_label)
        loss_gan = loss_D_real + loss_D_fake + loss_D_prior
        # print(loss_gan)
        optimizer_D.zero_grad()
        loss_gan.backward(retain_graph=True)
        optimizer_D.step()
        # +----------------------------+
        # |        decoder loss        |
        # +----------------------------+
        rec_loss = ((net_D(x_hat) - net_D(x)) ** 2).mean()
        print(rec_loss)
        loss_dec = gamma * rec_loss - loss_gan  # <<<< error here
        optimizer_d.zero_grad()  # decoder optimizer (distinct from optimizer_D above)
        loss_dec.backward(retain_graph=True)
        optimizer_d.step()
```

Some of the solutions to similar errors suggest setting `inplace=False` on dropout layers, but I am not using dropout here.

`adversarial_loss` is defined as:

```
adversarial_loss = torch.nn.BCELoss().to(device)
```
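As a sanity check (standalone snippet, all names here are mine), `BCELoss` on sigmoid outputs behaves as expected, so the loss definition itself does not seem to be the problem:

```python
import torch

# BCELoss expects probabilities in [0, 1] (i.e. post-sigmoid), with input
# and target of the same shape.
bce = torch.nn.BCELoss()
logits = torch.tensor([[2.0], [-2.0]])
probs = torch.sigmoid(logits)           # shape (2, 1), values in (0, 1)
target = torch.tensor([[1.0], [0.0]])   # real / fake labels
loss = bce(probs, target)
print(loss.item())                      # small positive scalar (~0.1269)
```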

Any help in troubleshooting is much appreciated.