I’m writing a unit test for a custom op. I’ve had no issues running the op, except when I try to run it inside a
torch.autograd.gradcheck. Here’s the test function:
x, x_len = range_tensor(100)
y, y_len = range_tensor(100)
x.requires_grad = True
torch.autograd.gradcheck(self.custom_op, (x, y, x_len, y_len))
I get the following failure:
  File "/mnt/data/code/custom_op.py", line 242, in backward
    D, R, X_len, Y_len = ctx.saved_tensors
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 102, 102]] is at version 2; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
The backtrace points to my custom op (as expected). In the
forward function, I have:
ctx.save_for_backward(D, R, X_len, Y_len)
To make absolutely certain that I’m not accidentally making in-place modifications to any of these tensors, I’ve added
.clone() to each of D, R, X_len, and Y_len before saving them. I’m still getting the same exception.
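For concreteness, here is a minimal toy sketch of the cloning pattern I mean (the real op computes D and R differently; the math here is just a placeholder):

```python
import torch

class ToyOp(torch.autograd.Function):
    """Stand-in for the custom op; D and R are placeholder intermediates."""

    @staticmethod
    def forward(ctx, x, y):
        D = x * y  # placeholder intermediate
        R = x + y  # placeholder result
        # Clone before saving, so later in-place edits to D or R
        # cannot bump the version counter of the saved tensors.
        ctx.save_for_backward(D.clone(), R.clone())
        return R

    @staticmethod
    def backward(ctx, grad_out):
        D, R = ctx.saved_tensors
        # d(x + y)/dx = d(x + y)/dy = identity
        return grad_out, grad_out

# gradcheck wants double precision inputs
x = torch.randn(5, dtype=torch.double, requires_grad=True)
y = torch.randn(5, dtype=torch.double)
ok = torch.autograd.gradcheck(ToyOp.apply, (x, y))
print(ok)  # True
```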
Does anyone have ideas on what’s going wrong here?