MSELoss() with ReluBackward0

Could you help me? When I use the MSE criterion, mse = nn.MSELoss(), it raises the error below. I tried the different solutions from the discussions, but I cannot solve it:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [20, 1, 6, 32, 32]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
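Following the hint in the error message itself, anomaly detection can be enabled so that the backward pass points at the forward-pass operation that was modified in place. A minimal sketch (a generic in-place edit standing in for whatever the real model does, not the poster's actual code):

```python
import torch

# the error's own hint: with anomaly detection on, the RuntimeError's
# traceback names the forward op whose saved tensor was overwritten
torch.autograd.set_detect_anomaly(True)

x = torch.randn(3, requires_grad=True)
y = torch.relu(x)   # ReLU saves its output for the backward pass
y.add_(1.0)         # in-place edit bumps the tensor's version counter
caught = False
try:
    y.sum().backward()
except RuntimeError:
    caught = True   # "modified by an inplace operation ... is at version 1"
print("anomaly detected:", caught)
```

Note that set_detect_anomaly(True) slows training down, so it is usually enabled only while debugging.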

Hi Anoud!

This is a shot in the dark, but is there any chance that you have a
torch.nn.ReLU (inplace = True) as a “layer” in your model?
If so, try changing it to inplace = False.
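To illustrate the suggestion above: a sketch of how an in-place ReLU layer can trip the version check when the tensor it overwrites is saved for backward (here a sigmoid, whose backward needs its own output; this is a generic reproduction, not the poster's model):

```python
import torch
import torch.nn as nn

# sigmoid's backward needs its own output; an in-place ReLU right
# after overwrites that saved tensor and trips the version check
x = torch.randn(5, requires_grad=True)
y = torch.sigmoid(x)
z = nn.ReLU(inplace=True)(y)
failed = False
try:
    z.sum().backward()
except RuntimeError:
    failed = True
print("inplace=True raised:", failed)

# inplace=False allocates a new output tensor, leaving y intact
x2 = torch.randn(5, requires_grad=True)
z2 = nn.ReLU(inplace=False)(torch.sigmoid(x2))
z2.sum().backward()
print("inplace=False grad ok:", x2.grad is not None)
```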

Best.

K. Frank

Thanks for responding. I solved it by adding detach():

mse_loss = criterion_mse(fake_img.detach(), real_img)

For the sake of completeness: detaching a tensor to work around an inplace-operation error is unfortunately not a real fix, as described here.
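To see why detaching only hides the problem: detach() cuts the tensor out of the autograd graph, so the MSE term can no longer send any gradient back into the generator. A sketch with a hypothetical tiny linear "generator" standing in for the real model:

```python
import torch
import torch.nn as nn

# hypothetical stand-in for the real generator
gen = nn.Linear(4, 4)
real_img = torch.randn(2, 4)
fake_img = gen(torch.randn(2, 4))
criterion_mse = nn.MSELoss()

# detaching fake_img removes it from the autograd graph, so this
# loss can never propagate gradients back into the generator
mse_detached = criterion_mse(fake_img.detach(), real_img)
print(mse_detached.grad_fn)  # None: nothing to backpropagate through

# keeping the graph intact lets the MSE term actually train gen
mse_loss = criterion_mse(fake_img, real_img)
mse_loss.backward()
print("generator got gradients:", gen.weight.grad is not None)
```

So the original error goes away only because the MSE term stops contributing gradients at all; the inplace operation still needs to be found and fixed.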

Sorry, I could not understand what you mean. I just need the MSELoss value to update the generator's loss, not to update MSELoss itself:

g_loss = self.lambda_mse * mse_loss + adversarial_loss
g_loss.backward()
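For that to work, fake_img must stay attached to the graph: the gradient of the MSE term has to flow through fake_img into the generator's parameters. A sketch of one generator step under assumed stand-ins (tiny linear generator/discriminator, a generic adversarial term, and a plain lambda_mse in place of self.lambda_mse):

```python
import torch
import torch.nn as nn

# hypothetical stand-ins for the generator, discriminator, and data
gen = nn.Linear(4, 4)
disc = nn.Linear(4, 1)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
criterion_mse = nn.MSELoss()
lambda_mse = 0.5  # assumed weight; the post calls it self.lambda_mse

noise = torch.randn(8, 4)
real_img = torch.randn(8, 4)

opt_g.zero_grad()
fake_img = gen(noise)
# no detach here: gradients must flow through fake_img into gen
mse_loss = criterion_mse(fake_img, real_img)
adversarial_loss = -disc(fake_img).mean()  # generic adversarial term
g_loss = lambda_mse * mse_loss + adversarial_loss
g_loss.backward()
opt_g.step()
print("generator updated:", all(p.grad is not None for p in gen.parameters()))
```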