RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation (torch==1.3.0)

Hi, I used CrossEntropyLoss() for segmentation, but I get this error. Please help me:
“RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 512, 1, 1]], which is output 0 of ReluBackward1, is at version 6; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).”

I can’t call loss.backward(). Here is my training loop:

import torch.nn as nn

# model, optimizer, device and train_generator are defined elsewhere.
j = 0
criterion1 = nn.CrossEntropyLoss()
best_loss = 1000000000
running_loss = 0.0

for epoch in range(2):

    model.train()

    for i, dat in enumerate(train_generator):
        j += 1
        features, masks, labels = dat

        features = features.to(device).float()
        masks = masks.to(device)
        labels = labels.to(device)

        optimizer.zero_grad()

        ### FORWARD AND BACK PROP
        features = features.permute(0, 3, 1, 2)  # NHWC -> NCHW
        # print(features.shape)
        logits_labels, logits_masks = model(features)

        # masks = masks[..., 0].squeeze()
        masks = masks.squeeze_()  # in-place, but masks carries no gradient

        # Loss
        loss1 = criterion1(logits_masks, masks)
        loss2 = criterion1(logits_labels, labels)
        loss = loss1 + loss2

        loss.backward()
        running_loss += loss.item()

        ### UPDATE MODEL PARAMETERS
        optimizer.step()
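
As the hint at the end of the error message suggests, anomaly detection can point at the forward operation whose saved tensor was later modified. A minimal sketch of how to turn it on (call it once before the loop above, and remove it afterwards, because it slows training down):

import torch

# Debugging aid from the error hint: with this enabled, the backward error
# also prints the traceback of the forward op whose saved tensor was
# modified in place.
torch.autograd.set_detect_anomaly(True)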

Within your forward pass, do you have torch.nn.ReLU(inplace=True)? Set it to torch.nn.ReLU(inplace=False).
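
For illustration only, a hypothetical block showing where the flag lives (the layers here are made up, not taken from your model):

import torch.nn as nn

# Hypothetical block, for illustration only. With inplace=False every
# activation is written to a fresh tensor, so nothing that autograd saved
# for the backward pass gets overwritten.
block = nn.Sequential(
    nn.Conv2d(256, 512, kernel_size=3, padding=1),
    nn.BatchNorm2d(512),
    nn.ReLU(inplace=False),  # was nn.ReLU(inplace=True)
)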

Thank you so much. That solved it, but why should inplace=True be set to False?

Because you’re trying to compute a gradient. Autograd can’t compute gradients through a tensor that was saved for the backward pass and then modified by an in-place operation, so make sure it’s set to False.

There’s a nice discussion about it on the forums here: What is `in-place operation`? - #15 by Alex_Fann
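
For anyone who wants to see the failure in isolation, here is a minimal sketch (not from this thread) that reproduces the same error: ReLU saves its output for the backward pass, and a later in-place operation on that output invalidates it.

import torch

x = torch.randn(4, 512, 1, 1, requires_grad=True)

y = torch.relu(x)   # autograd saves y for ReLU's backward pass
y.add_(1.0)         # in-place edit bumps y's version counter
y.sum().backward()  # RuntimeError: one of the variables needed for gradient
                    # computation has been modified by an inplace operation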

Does it have an effect on the loss? I mean, won’t the loss end up too high?

On the output of the loss, no. On the calculation of the gradient of the loss with respect to the parameters, yes.
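
If you want to convince yourself of that, a quick sketch comparing the two on an arbitrary tensor:

import torch
import torch.nn as nn

x = torch.randn(4, 512, 1, 1)
out_of_place = nn.ReLU(inplace=False)(x)
in_place = nn.ReLU(inplace=True)(x.clone())  # clone so x itself is untouched
print(torch.equal(out_of_place, in_place))   # True: identical forward values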

Thank you so much. :rose: