The output gradient w.r.t the input

I am using PyTorch version 0.4.0.

To get the gradient of the output w.r.t. the input, I used the following code.

import torch
import torch.nn as nn

m = nn.Linear(20, 30)
input = torch.randn(128, 20)
input.requires_grad = True   # track gradients on the input
output = m(input).sum()      # reduce to a scalar so backward() needs no arguments
output.backward()
print(input.grad.data)       # gradient of the output w.r.t. the input

Am I correct?

Thanks!

Looks good to me, but the most idiomatic way to have the input require gradients seems to be

input = torch.randn(128, 20, requires_grad=True)

Best regards

Thomas

Thanks for your reply.

Now I can perform it with only a single data input at a time, but the process is too slow.

Is there a way to perform it with a data batch all at once?

Have the data minibatch as your input?
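Something along these lines, for instance (a minimal sketch reusing the nn.Linear setup from your first post; since the rows of a minibatch are independent, backpropagating the summed output fills each row of input.grad with the gradient for its own sample):

import torch
import torch.nn as nn

m = nn.Linear(20, 30)
input = torch.randn(128, 20, requires_grad=True)   # whole minibatch at once
m(input).sum().backward()                          # one backward pass for all samples
print(input.grad.shape)                            # torch.Size([128, 20]): one gradient row per sample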

Best regards

Thomas

I am sorry for the late response.

My question is whether there is a way to apply .backward() when the output dimension is larger than one and each output dimension corresponds to one input sample in the batch.

You can feed a tensor to backward, as in x.backward(weight); this is mathematically equivalent to (x * weight).sum().backward(). There isn't anything that lets you get arbitrary derivatives of vectors (i.e. Jacobians).
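For instance (a minimal sketch reusing the Linear example from above; the all-ones weight tensor is just a placeholder for whatever weighting you need):

import torch
import torch.nn as nn

m = nn.Linear(20, 30)
input = torch.randn(128, 20, requires_grad=True)
output = m(input)                  # shape (128, 30), not a scalar
weight = torch.ones_like(output)   # placeholder weights, one per output element
output.backward(weight)            # same as (output * weight).sum().backward()
print(input.grad.shape)            # torch.Size([128, 20])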

Best regards

Thomas