A view of a leaf Variable that requires grad is being used in an in-place operation

I have the following code, which works on PyTorch 1.11 and fails on 1.13:

import torch

print(torch.__version__)
model = torch.jit.load("foo/ranker.pt")
model = model.eval()
x = torch.jit.load("foo/tensor_all.pt")
inputs = list(x.parameters())
for i in inputs:
    i.requires_grad = False
# first call works fine
model.forward(*inputs)
# second call fails
model.forward(*inputs)
print("ALL GOOD")

The error it fails with is:

relu
        return handle_torch_function(relu, (input,), input, inplace=inplace)
    if inplace:
        result = torch.relu_(input)
                 ~~~~~~~~~~~ <--- HERE
    else:
        result = torch.relu(input)
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.

I don’t quite understand why I would get this error if I call model.eval() in my code. Shouldn’t that make all operations not require gradients?

No, model.eval() will recursively set the internal self.training attribute of all registered nn.Modules to False, and (some) modules will change their behavior accordingly (i.e. if the self.training attribute is used in their forward method). If you want to disable the gradient calculation, wrap your forward pass in a with torch.no_grad() or with torch.inference_mode() guard.
However, I also don’t understand why the error is raised in the first place, as your code snippet doesn’t show a backward pass, just two forward passes.
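For example, applying that suggestion to the snippet above (a minimal sketch; it assumes the same foo/ranker.pt and foo/tensor_all.pt files from the reproduction):

import torch

model = torch.jit.load("foo/ranker.pt").eval()
x = torch.jit.load("foo/tensor_all.pt")
inputs = list(x.parameters())

# Disable autograd for the whole forward pass; model.eval() alone only
# flips the self.training flag and does not stop gradient tracking.
with torch.no_grad():
    model.forward(*inputs)
    model.forward(*inputs)

# torch.inference_mode() can be used the same way and additionally skips
# some autograd bookkeeping (view and version tracking), so it is usually
# a bit faster for pure inference.
with torch.inference_mode():
    model.forward(*inputs)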

@ptrblck awesome, that helped! Thank you!

I have a follow-up question but posted it in PyTorch 1.13: RuntimeError: tensors used as indices must be long, byte or bool tensors