Modify the softmax values without causing errors

I am trying to do something similar to Instance Mapping projection: I project the instance segmentation results (the output of a sigmoid) onto a softmax probability map. However, I get an error saying that a variable needed for gradient computation has been modified by an in-place operation. How do I solve this? Any suggestions?

Here is the code:

import torch.nn.functional as F
from torch.nn.functional import interpolate

feature_probs = F.softmax(feature_probs, dim=1)  # 1 x C x H x W
masks = masks_logits.sigmoid()                   # N x 1 x 28 x 28
# boxes:   N x 4, in (x1, y1, x2, y2) format
# classes: N x 1
for mask, box, pred_class in zip(masks, boxes, classes):
    h = int(box[3] - box[1])
    w = int(box[2] - box[0])
    resize_mask = interpolate(mask[None], size=(h, w))[0, 0]  # h x w
    feature_probs[0, int(pred_class) + 1, box[1]:box[3], box[0]:box[2]] = resize_mask

feature_probs_normalized = F.normalize(feature_probs, p=1, dim=1)  # renormalize over channels
loss = calculate_loss(feature_probs_normalized, target)

Error: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 7, 256, 256]], which is output 0 of SoftmaxBackward. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later.

One easy thing you could try is to use feature_probs = F.softmax(feature_probs, 1).clone() instead. Then only the clone of the output tensor is modified, not the output tensor itself.
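For background: softmax's backward pass is computed from its saved output tensor, so writing into that output in place invalidates what autograd needs. Here is a self-contained toy reproduction and the clone() fix (shapes and names here are made up for illustration, not taken from your model):

import torch
import torch.nn.functional as F

logits = torch.randn(1, 3, 4, 4, requires_grad=True)

# Broken: this writes into the very tensor that SoftmaxBackward saved.
probs = F.softmax(logits, dim=1)
probs[0, 1, :2, :2] = 0.5
# probs.sum().backward()  # raises the RuntimeError above

# Fixed: write into a clone; the saved softmax output stays intact.
probs = F.softmax(logits, dim=1).clone()
probs[0, 1, :2, :2] = 0.5
probs.sum().backward()  # works; logits.grad is populated

Note that clone() (unlike detach()) stays in the autograd graph, so gradients still flow back through the softmax for all the entries you do not overwrite.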

Best regards

Thomas

Thanks!! It works :slight_smile: