Setting requires_grad in autograd Variable

I am looking at the example here: https://pytorch.org/docs/_modules/torch/nn/modules/loss.html. I noticed that the input data is passed as:

input = autograd.Variable(torch.randn(3, 5), requires_grad=True)

Shouldn’t requires_grad be set to False, since this is the input data and not the weights?

Here is the example:

>>> import torch
>>> from torch import nn, autograd
>>> m = nn.LogSoftmax()
>>> loss = nn.NLLLoss()
>>> # input is of size nBatch x nClasses = 3 x 5
>>> input = autograd.Variable(torch.randn(3, 5), requires_grad=True)
>>> # each element in target has to have 0 <= value < nclasses
>>> target = autograd.Variable(torch.LongTensor([1, 0, 4]))
>>> output = loss(m(input), target)
>>> output.backward()

With most NN code, you don’t want to set requires_grad=True unless you explicitly need the gradient w.r.t. your input. In this example, however, requires_grad=True is necessary: neither LogSoftmax nor NLLLoss has any learnable parameters, so the input is the only thing gradients could be computed for, and without it backward() would have nothing to do.
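
To make that concrete, here is a minimal sketch (not from the docs example above) that swaps in a model with learnable parameters, e.g. an nn.Linear layer. With parameters in the graph, the input can stay at the default requires_grad=False and the gradients land on the weights instead:

import torch
from torch import nn, autograd

# Hypothetical model that, unlike LogSoftmax + NLLLoss alone, has learnable parameters.
m = nn.Sequential(nn.Linear(10, 5), nn.LogSoftmax(dim=1))
loss = nn.NLLLoss()

# Input left at the default requires_grad=False.
input = autograd.Variable(torch.randn(3, 10))
target = autograd.Variable(torch.LongTensor([1, 0, 4]))

output = loss(m(input), target)
output.backward()

print(m[0].weight.grad is not None)  # True: gradients go to the parameters
print(input.grad)                    # None: no gradient w.r.t. the input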

I see. So in this example that makes sense.

By the way, does requires_grad default to True? I do not see it in the documentation.

It defaults to False.
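
For example, a quick check (assuming torch and autograd are imported as in the example above):

>>> x = autograd.Variable(torch.randn(2, 2))
>>> x.requires_grad
False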