Gradient is None after backward

Hey!

I recently started with PyTorch and I'm currently facing a problem with computing gradients.

I have a list of tensors (L) with requires_grad = True that are stacked into one tensor: T = torch.stack(L, dim=0).

So when I try to compute the gradients for all elements of the tensor T with out.backward(), T[0].grad is always None.

How can I get the gradients in this case?
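For reference, here is a minimal sketch of the setup as I understand it (the shapes and the use of a sum for out are my assumptions, not from the original post):

```python
import torch

# List of leaf tensors, each tracking gradients
L = [torch.randn(3, requires_grad=True) for _ in range(4)]

# Stacking produces a new, non-leaf tensor
T = torch.stack(L, dim=0)

# 'out' assumed here to be some scalar computed from T
out = (T ** 2).sum()
out.backward()

print(T[0].grad)  # None: T[0] is a non-leaf slice, so .grad is not populated
print(L[0].grad)  # populated: gradients accumulate on the original leaf tensors
```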

Where does out come from in your example?

Where exactly is out coming from?

If you are training a model, then I'm assuming that out is the output of your model, in which case the code should be:

loss.backward()

Not

out.backward()
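Something like this, a generic training-step sketch where the model, data, and loss function are placeholders I made up for illustration:

```python
import torch
import torch.nn as nn

# Placeholder model, loss, and data (assumptions, not from the original post)
model = nn.Linear(10, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(8, 10)
y = torch.randn(8, 1)

optimizer.zero_grad()
out = model(x)            # 'out' is the model output
loss = criterion(out, y)  # reduce the output to a scalar loss
loss.backward()           # backpropagate from the loss, not from 'out'
optimizer.step()
```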