"RuntimeError: 'inputs' to backward is not a leaf Tensor" in torch.autograd.backward while IS a leaf tensor

Hi,

I am running some experiments on autograd and found that the following small example raises a RuntimeError: One of the differentiated Tensors given as 'inputs' to backward is not a leaf Tensor.

However, printing x.is_leaf shows True. So why is this error thrown?


import torch
import torch.nn as nn
import torch.nn.functional as F

class TestModule(nn.Module):
    def __init__(self, input_size, output_size):
        super().__init__()
        internal_size = 5
        self.linear1 = nn.Linear(input_size, internal_size)
        self.linear2 = nn.Linear(internal_size, output_size)
        self.model = nn.Sequential(self.linear1, nn.ReLU(), self.linear2)

    def forward(self, x):
        x = self.model(x)
        return x

model = TestModule(4, 3)

x, y = torch.rand(size=(20, 4)), torch.randint(low=0, high=3, size=(20,))
x.requires_grad = True
loss = F.cross_entropy(model(x), y)

print(x.is_leaf)  # prints True
torch.autograd.backward(loss, create_graph=False, inputs=x)  # raises the RuntimeError

I cannot reproduce the error using 1.9.0a0+git6d45d7a and get a valid gradient in x.grad, so you might need to update to the latest nightly (or to the stable release in case you are using an older version).
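
If you cannot update right away, here is a minimal sketch of two possible workarounds on 1.8.1 (I have not verified them on that exact build): wrapping the tensor in a list when passing it as inputs, and using torch.autograd.grad, which returns the gradients directly instead of accumulating them into x.grad. The model and shapes just mirror your snippet.

import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(4, 5), nn.ReLU(), nn.Linear(5, 3))
x = torch.rand(20, 4, requires_grad=True)
y = torch.randint(low=0, high=3, size=(20,))

# Possible workaround 1: pass `inputs` as a list/tuple instead of a bare tensor.
loss = F.cross_entropy(model(x), y)
torch.autograd.backward(loss, inputs=[x])
print(x.grad.shape)  # torch.Size([20, 4])

# Possible workaround 2: torch.autograd.grad returns the gradient w.r.t. x directly.
# The loss is recomputed here because the previous backward already freed the graph.
loss = F.cross_entropy(model(x), y)
grad_x, = torch.autograd.grad(loss, x)
print(grad_x.shape)  # torch.Size([20, 4])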

I am using the stable torch-1.8.1 version, so this is a bug that will be fixed in 1.9.0 then. Strange how such a simple backward pass is still buggy in today's PyTorch.