Compute gradient of output w.r.t. parameters

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5, 1)
        self.conv2 = nn.Conv2d(20, 50, 5, 1)
        self.fc1 = nn.Linear(4 * 4 * 50, 500)
        self.fc2 = nn.Linear(500, 10)

    def forward(self, x):
        # Input: (N, 1, 28, 28)
        x = F.relu(self.conv1(x))       # -> (N, 20, 24, 24)
        x = F.max_pool2d(x, 2, 2)       # -> (N, 20, 12, 12)
        x = F.relu(self.conv2(x))       # -> (N, 50, 8, 8)
        x = F.max_pool2d(x, 2, 2)       # -> (N, 50, 4, 4)
        x = x.view(-1, 4 * 4 * 50)      # flatten to (N, 800)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)  # log-probabilities over 10 classes
This is the structure of the neural network.

I want to compute the gradient of the network's output w.r.t. conv1.weight, conv2.weight, fc1.weight, etc., so I do

output.backward(torch.ones_like(output))

and then read conv1.weight.grad, conv2.weight.grad, and so on. Is this correct?
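
For reference, here is a minimal runnable sketch of what I mean (the input shape (N, 1, 28, 28) is an assumption on my part, inferred from the 4 * 4 * 50 flatten size):

net = Net()
x = torch.randn(8, 1, 28, 28)            # assumed MNIST-sized input batch
output = net(x)                          # shape (8, 10)

output.backward(torch.ones_like(output))

# Each parameter's .grad now holds the gradient w.r.t. that parameter
print(net.conv1.weight.grad.shape)       # torch.Size([20, 1, 5, 5])
print(net.conv2.weight.grad.shape)       # torch.Size([50, 20, 5, 5])
print(net.fc1.weight.grad.shape)         # torch.Size([500, 800])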

Hi,

Yes, this is correct.
Be careful, though, depending on the size of your output: if the output has more than one element, backward(torch.ones_like(output)) gives you the sum of the gradients of each element of the output (equivalently, the gradient of output.sum()), not the gradient of each element separately.
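
If you need the gradient of a single output element instead, select that element before calling backward. A minimal sketch (reusing Net and x from above; note that backward accumulates into the .grad buffers, so zero them between calls):

net = Net()
x = torch.randn(8, 1, 28, 28)
output = net(x)

# Gradient of one scalar element, e.g. sample 0, class 3
net.zero_grad()                           # clear previously accumulated grads
output[0, 3].backward(retain_graph=True)  # keep the graph for a second call
grad_conv1 = net.conv1.weight.grad.clone()

# Alternative: torch.autograd.grad returns the gradients directly,
# without accumulating into the .grad buffers
grads = torch.autograd.grad(output[0, 3], list(net.parameters()))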