I am getting an error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1, 5]], which is output 0 of SoftmaxBackward, is at version 1; expected version 0 instead.
My operation on the tensor of dimensions (1,5) is this:
probs = net(*input_var)
probs[-1] = 0.
probs = probs / torch.sum(probs)
To remove the in-place operation, I replaced it with:
probs = net(*input_var)
probs_new = probs.clone()
probs_new[-1] = 0.
probs_new = probs_new / torch.sum(probs_new)
But this still gives the same error. What is the correct way to avoid the in-place operation?
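For context, one pattern that sidesteps the error entirely is to build the modified tensor without any in-place write on a tensor that autograd tracks: multiply by a mask (out of place) and renormalise. This is only a sketch; `net(*input_var)` is replaced here by a stand-in softmax output, and it assumes you want to zero the last entry along the class dimension.

```python
import torch

# Stand-in for probs = net(*input_var): a softmax output that autograd
# needs intact for the backward pass.
logits = torch.randn(1, 5, requires_grad=True)
probs = torch.softmax(logits, dim=1)

# Zero the last class without writing into probs: the mask has no grad
# history, so the in-place write on it is safe.
mask = torch.ones_like(probs)
mask[:, -1] = 0.
probs_new = probs * mask
probs_new = probs_new / probs_new.sum()

# Backward succeeds because probs itself was never modified in place.
probs_new[0, 0].backward()
```

The key point is that every operation applied to `probs` (`*`, `/`) creates a new tensor, so the softmax output stays at version 0 and its backward node remains valid.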