When to set up autograd in nn module

When we define the forward pass, do we need to set up autograd for any tensors, such as the hidden/cell state?
For example,

    h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_()
    c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_()

In some tutorials, I found that from beginning to end not a single tensor has requires_grad set.
Is it because the nn.LSTM module already handles this?

Hi,

This is because only the Tensors for which you want the .grad field to be populated need the user to call requires_grad_().
When working with nn, this is done automatically for nn.Parameters.
So if you use nn.Parameters, you never need to call this explicitly, unless you want extra gradients to be computed for things that are not nn.Parameters.
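To make this concrete, here is a minimal sketch of the situation from the question: the class name, layer sizes, and input shapes are made up for illustration, but it shows that h0/c0 can be plain zeros while the nn.LSTM weights (which are nn.Parameters) still get their .grad populated after backward().

    import torch
    import torch.nn as nn

    class LSTMModel(nn.Module):
        def __init__(self, input_dim, hidden_dim, layer_dim, output_dim):
            super().__init__()
            self.hidden_dim = hidden_dim
            self.layer_dim = layer_dim
            self.lstm = nn.LSTM(input_dim, hidden_dim, layer_dim, batch_first=True)
            self.fc = nn.Linear(hidden_dim, output_dim)

        def forward(self, x):
            # Plain zeros are fine here: h0/c0 are just inputs, not parameters,
            # so they only need requires_grad_() if you want their own .grad.
            h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim, device=x.device)
            c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim, device=x.device)
            out, _ = self.lstm(x, (h0, c0))
            return self.fc(out[:, -1, :])

    model = LSTMModel(input_dim=8, hidden_dim=16, layer_dim=2, output_dim=1)
    x = torch.randn(4, 10, 8)  # (batch, seq_len, input_dim)
    loss = model(x).sum()
    loss.backward()

    # The LSTM weights are nn.Parameters, so requires_grad is already True
    # and their gradients are populated without any explicit call.
    print(model.lstm.weight_ih_l0.requires_grad)     # True
    print(model.lstm.weight_ih_l0.grad is not None)  # True

If you did want gradients with respect to the initial states themselves (e.g. to learn them), that is the case where you would call requires_grad_() on h0/c0, or register them as nn.Parameters.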

Thank you for your help!