```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, input_size, h1, output_size):
        super().__init__()
        self.fc1 = nn.Linear(input_size, h1)
        self.ac1 = nn.ReLU()
        self.fc2 = nn.Linear(h1, output_size)

    def forward(self, x):
        x = self.fc1(x)
        x = self.ac1(x)
        x = self.fc2(x)
        return x
```
```python
net = Net(1, 16, 11)
x = torch.rand(1, requires_grad=True)
output = net(x)

# I'm trying to get the gradient dx/doutput[i]
for out in output:
    # retain_graph=True so the graph survives repeated backward calls
    print(out.backward(retain_graph=True))
```
This prints `None` a bunch of times. Why? These should be the derivatives dx/doutput[i].
`tensor.backward()` doesn't return anything; it sets/updates the `.grad` attribute on the autograd leaves. That's why your loop prints `None`.
If you want the gradients returned instead of accumulated in `.grad`, use `torch.autograd.grad`.
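A minimal sketch of that suggestion, using a stand-in `nn.Sequential` model with the same 1 → 16 → 11 shapes as the `Net` in the question (the model name and shapes are assumptions for illustration):

```python
import torch
import torch.nn as nn

# Stand-in for the Net in the question: 1 input, hidden size 16, 11 outputs
net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 11))

x = torch.rand(1, requires_grad=True)
output = net(x)

# torch.autograd.grad returns the gradients instead of writing them to .grad.
# retain_graph=True is needed because we backpropagate through the same
# graph once per output element.
grads = [torch.autograd.grad(out, x, retain_graph=True)[0] for out in output]
print(grads)  # one gradient d(output[i])/dx per output element
```

Each entry of `grads` has the same shape as `x`, and nothing is accumulated into `x.grad`.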
Both of those do not work either. Am I making a silly mistake? I expect this to return a tensor of all the d_output/dx (I just realized I typed it wrong in my initial problem description): I want the partial derivatives of all the outputs with respect to the one input.
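For "all the partial derivatives of the outputs with respect to the one input" in a single call, one option is `torch.autograd.functional.jacobian`. A sketch, again assuming a stand-in model with the question's 1 → 16 → 11 shapes:

```python
import torch
import torch.nn as nn

# Stand-in for the Net in the question: 1 input, hidden size 16, 11 outputs
net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 11))
x = torch.rand(1)

# jacobian(f, x) computes d f(x)[i] / d x[j] for every (i, j) pair at once.
jac = torch.autograd.functional.jacobian(net, x)
print(jac.shape)  # (11, 1): 11 outputs, 1 input
```

This avoids the per-output loop entirely; row `i` of `jac` is the gradient of `output[i]` with respect to `x`.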