Directly getting gradients

Let’s say you compute a variable y as a function of a variable x.

When you call y.backward(dL_dy), autograd puts dL_dx = dL_dy * dy_dx into x.grad. So if you pass a tensor full of ones as dL_dy, x.grad is exactly dy_dx, which is precisely the gradient you are looking for.

import torch
from torch.autograd import Variable

x = Variable(torch.ones(10), requires_grad=True)
# dy_dx is 1, 2, ..., 10 elementwise
y = x * Variable(torch.linspace(1, 10, 10), requires_grad=False)
# upstream gradient dL_dy of all ones
y.backward(torch.ones(10))
print(x.grad)

produces

Variable containing:
  1
  2
  3
  4
  5
  6
  7
  8
  9
 10
[torch.FloatTensor of size 10]
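
More generally, you can pass any tensor of the same shape as y as the upstream gradient, and each entry scales the corresponding dy_dx. A quick sketch along the same lines (the constant 2 is just an arbitrary stand-in for dL_dy):

import torch
from torch.autograd import Variable

x = Variable(torch.ones(10), requires_grad=True)
y = x * Variable(torch.linspace(1, 10, 10), requires_grad=False)
# pretend dL_dy is 2 everywhere; x.grad becomes 2 * dy_dx
y.backward(torch.ones(10) * 2)
print(x.grad)  # 2, 4, 6, ..., 20

(In PyTorch 0.4 and later, Variable is merged into Tensor, so x = torch.ones(10, requires_grad=True) works directly without the wrapper.)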