Hi, I am getting the following error when calling backward() while training my model. The code was working fine before, but suddenly this error started appearing:
I have already tried changing the inplace flag from True to False, i.e. LeakyReLU(0.2, inplace=False).
Could anyone suggest a solution?
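For reference, this is roughly the kind of change I made. The block below is only an illustrative sketch (the layer sizes and surrounding layers are placeholders, not my actual network); the relevant part is the inplace flag on the activation:

```python
import torch.nn as nn

# Illustrative block only -- the point is switching the activation's
# inplace flag from True to False so it no longer overwrites its input.
block = nn.Sequential(
    nn.Conv2d(256, 512, kernel_size=4, stride=2, padding=1),
    nn.BatchNorm2d(512),
    nn.LeakyReLU(0.2, inplace=False),  # was: nn.LeakyReLU(0.2, inplace=True)
)
```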
Details of the error:
RuntimeError Traceback (most recent call last)
Cell In[13], line 2
1 epochs=1
----> 2 train(epochs)
Cell In[12], line 9, in train(max_epochs)
7 for haze_images, dehaze_images, in train_loader:
8 unet_loss, dis_loss, mse, ssim = DUNet.process(haze_images.cuda(), dehaze_images.cuda())
----> 9 DUNet.backward(unet_loss.cuda(), dis_loss.cuda())
10 print('Epoch: '+str(epoch+1)+ ' || Batch: '+str(i)+ " || unet loss: "+str(unet_loss.cpu().item()) + " || dis loss: "+str(dis_loss.cpu().item()) + " || mse: "+str(mse.cpu().item()) + " | ssim:" + str(ssim.cpu().item()) )
11 mse_epoch = mse_epoch + mse.cpu().item()
Cell In[6], line 110, in DU_Net.backward(self, unet_loss, dis_loss)
107 self.dis_optimizer.step()
109 if unet_loss is not None:
--> 110 unet_loss.backward()
111 self.unet_optimizer.step()
File ~\anaconda3\envs\myenv\lib\site-packages\torch\tensor.py:221, in Tensor.backward(self, gradient, retain_graph, create_graph)
213 if type(self) is not Tensor and has_torch_function(relevant_args):
214 return handle_torch_function(
215 Tensor.backward,
216 relevant_args,
(...)
219 retain_graph=retain_graph,
220 create_graph=create_graph)
--> 221 torch.autograd.backward(self, gradient, retain_graph, create_graph)
File ~\anaconda3\envs\myenv\lib\site-packages\torch\autograd\__init__.py:130, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
127 if retain_graph is None:
128 retain_graph = create_graph
--> 130 Variable.execution_engine.run_backward(
131 tensors, grad_tensors, retain_graph, create_graph,
132 allow_unreachable=True)
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 512, 4, 4]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
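Following the hint at the end of the traceback, I also tried enabling anomaly detection before running the training loop, so that autograd would point at the forward operation whose output was later modified in place. This is a minimal sketch of how I ran it (train() is the function from Cell In[12] above):

```python
import torch

# As the error hint suggests: make autograd record forward-pass stack
# traces so the failing operation is reported during backward().
torch.autograd.set_detect_anomaly(True)

epochs = 1
train(epochs)  # train() is defined in Cell In[12] above
```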