Help manually getting gradients from network

I have created the following MWE.

import torch
import torch.nn as nn


class Net(nn.Module):
    def __init__(self, input_size, h1, output_size):
        super().__init__()
        self.fc1 = nn.Linear(input_size, h1)
        self.ac1 = nn.ReLU()
        self.fc2 = nn.Linear(h1, output_size)
        
    def forward(self, x):
        x = self.fc1(x)
        x = self.ac1(x)
        x = self.fc2(x)
        return x


net = Net(1, 16, 11)

x = torch.rand(1, requires_grad=True)
output = net(x)

#I'm trying to get the gradient dx/doutput[i]
for out in output:
    print(out.backward(retain_graph=True))


This prints None a bunch of times. Why? These should be the derivatives dx/doutput[i].

tensor.backward() doesn't return anything; it sets/updates the .grad attribute on the autograd leaves.
If you want gradients to be returned instead of accumulated in .grad, you would want to use torch.autograd.grad.
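For example, with the MWE above (a sketch; any scalar element of output works here):

# .backward() returns None; the gradient is accumulated into x.grad
output[0].backward(retain_graph=True)
print(x.grad)  # d output[0] / dx

# torch.autograd.grad returns the gradients directly (as a tuple), without touching x.grad
g, = torch.autograd.grad(output[1], x, retain_graph=True)
print(g)  # d output[1] / dx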

Best regards

Thomas

torch.autograd.grad(output, x)
torch.autograd.grad(x, output)

Neither of these works either. Am I making a silly mistake? I expect this to return a tensor of all the d_output/dx values (I just realized I typed it the wrong way around in my initial problem description). I want the partial derivatives of all the outputs with respect to the one input.
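One way to get all the d_output[i]/dx values (a sketch, assuming the MWE above): torch.autograd.grad expects (outputs, inputs) in that order, and for a non-scalar output it needs either a grad_outputs argument or one call per output element; torch.autograd.functional.jacobian collects all the partials in one go.

# All partial derivatives d output[i] / dx at once: Jacobian of shape (11, 1)
jac = torch.autograd.functional.jacobian(net, x)
print(jac)

# Equivalent per-element loop using torch.autograd.grad
grads = torch.stack([
    torch.autograd.grad(out, x, retain_graph=True)[0]
    for out in output
])
print(grads)  # also shape (11, 1)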