Changing an in-place operation


I am getting an error: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1, 5]], which is output 0 of SoftmaxBackward, is at version 1; expected version 0 instead.

My operation on the tensor of dimensions (1,5) is this:

probs = net(*input_var)
probs[-1] = 0.
probs = probs/torch.sum(probs)

To remove the in-place operation, I replaced it with:

probs = net(*input_var)
probs_new = probs.clone()
probs_new[-1] = 0.
probs_new = probs_new/torch.sum(probs_new)

But this still gives the same error. What is the correct way to avoid the in-place operation?
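(Note for later readers: when the clone is applied at every call site, this pattern does avoid the error. Here is a minimal runnable sketch, using a hypothetical stand-in for the poster's net, that shows the in-place version failing and the clone version succeeding:)

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the poster's net: any module ending in a softmax.
net = nn.Sequential(nn.Linear(5, 5), nn.Softmax(dim=1))

x = torch.randn(1, 5)

# In-place version: writing into probs mutates the tensor that
# SoftmaxBackward saved for the backward pass, so backward() raises
# the "modified by an inplace operation" RuntimeError.
probs = net(x)
probs[0, -1] = 0.                 # in-place write on an autograd intermediate
probs = probs / torch.sum(probs)
try:
    probs.sum().backward()
except RuntimeError as e:
    print("in-place version fails:", e)

# Out-of-place version: clone() first, then modify only the copy.
probs = net(x)
probs_new = probs.clone()
probs_new[0, -1] = 0.             # safe: mutates the clone, not the softmax output
probs_new = probs_new / torch.sum(probs_new)
probs_new.sum().backward()        # succeeds
print("clone version backward OK")
```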


Do you get the exact same error? Is it still an output of Softmax?
Maybe something inside your net changes the output of the Softmax in place before returning it?
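(A sketch of the failure mode this reply describes, using a hypothetical module: if the in-place write happens *inside* forward(), cloning the returned tensor afterwards is too late, and the error persists:)

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    """Hypothetical net whose forward() mutates the softmax output in place."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(5, 5)

    def forward(self, x):
        probs = torch.softmax(self.fc(x), dim=1)
        probs[0, -1] = 0.   # in-place edit inside forward(); cloning the
                            # *returned* tensor later cannot undo this
        return probs

net = Net()
out = net(torch.randn(1, 5)).clone()   # clone outside forward() is too late
try:
    (out / out.sum()).sum().backward()
except RuntimeError as e:
    print("still fails:", e)
```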

Seems like I forgot to do the same in one more place where that piece of code was used. It works now. Thanks.
