I have the following code, which works on PyTorch 1.11 but fails on 1.13:
import torch
print(torch.__version__)
model = torch.jit.load("foo/ranker.pt")
model = model.eval()
x = torch.jit.load("foo/tensor_all.pt")
inputs = list(x.parameters())
for i in inputs:
    i.requires_grad = False
# first call works fine
model.forward(*inputs)
# second call fails
model.forward(*inputs)
print("ALL GOOD")
The error it fails with is:
relu
    return handle_torch_function(relu, (input,), input, inplace=inplace)
    if inplace:
        result = torch.relu_(input)
                 ~~~~~~~~~~~ <--- HERE
    else:
        result = torch.relu(input)
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
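As far as I can tell, this message comes from autograd's generic check on in-place operations; a standalone snippet (my own construction, unrelated to the model) reproduces the same RuntimeError:

import torch

x = torch.zeros(3, requires_grad=True)  # a leaf tensor that requires grad
v = x[:2]                               # a view of that leaf
v.relu_()                               # raises the same RuntimeError as above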
I don’t quite understand why I would get this error if I call model.eval() in my code. Shouldn’t that make all operations stop requiring gradients?
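For what it’s worth, a quick sanity check with a stand-in module (my own snippet, not the real model) suggests eval() only toggles training-mode behavior such as dropout and batch norm, and leaves requires_grad untouched:

import torch

layer = torch.nn.Linear(4, 4)
layer.eval()  # switches dropout/batchnorm to inference behavior only
print(all(p.requires_grad for p in layer.parameters()))  # prints True

So if eval() is not supposed to disable gradients, what changed between 1.11 and 1.13 that makes the second forward() call fail?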