Hi,
I enabled anomaly detection as follows:
torch.autograd.set_detect_anomaly(True)
for epoch in range(max_epochs):
    ...
I also set inplace=False in the ReLU layers.
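The ReLU change is just the constructor argument, roughly like this (a minimal sketch; the block and sizes are only illustrative, not my actual model):

import torch
import torch.nn as nn

# activations are built with inplace=False so ReLU does not overwrite its input tensor
block = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(inplace=False),
)
out = block(torch.randn(4, 128))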
The following error is still generated:
    loss_D_prior = adversarial_loss(d_p, fake_label)
  File "/home/banikr/miniconda3/envs/ims37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/banikr/miniconda3/envs/ims37/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 613, in forward
    return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
  File "/home/banikr/miniconda3/envs/ims37/lib/python3.7/site-packages/torch/nn/functional.py", line 3083, in binary_cross_entropy
    return torch._C._nn.binary_cross_entropy(input, target, weight, reduction_enum)
 (Triggered internally at /opt/conda/conda-bld/pytorch_1659484809535/work/torch/csrc/autograd/python_anomaly_mode.cpp:102.)
    allow_unreachable=True, accumulate_grad=True)  # Calls into the C++ engine to run the backward pass
Traceback (most recent call last):
  File "/home/banikr/.config/JetBrains/PyCharm2022.1/scratches/scratch_8.py", line 124, in <module>
    loss_dec.backward()#retain_graph=True)
  File "/home/banikr/miniconda3/envs/ims37/lib/python3.7/site-packages/torch/_tensor.py", line 396, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/banikr/miniconda3/envs/ims37/lib/python3.7/site-packages/torch/autograd/__init__.py", line 175, in backward
    allow_unreachable=True, accumulate_grad=True)  # Calls into the C++ engine to run the backward pass
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
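My understanding of what the message describes is the pattern below (a minimal, self-contained sketch, not my actual code): two losses share part of the graph, the first .backward() frees the saved tensors, and the second one then fails unless retain_graph=True is passed.

import torch

x = torch.randn(3, requires_grad=True)
y = x.exp()              # exp() saves its output for the backward pass
loss_a = y.sum()
loss_b = (y * 3).sum()   # shares the exp() node with loss_a

loss_a.backward()        # frees the tensors saved by the shared graph
loss_b.backward()        # raises the same "backward through the graph a second time" RuntimeError
# loss_a.backward(retain_graph=True) would keep the saved tensors for the second call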