Why is grad None?

In the following program, b.grad is None but a.grad is not None. Why is that?

    import torch

    a = torch.tensor([[[[1.0]]]]).requires_grad_()
    b = torch.nn.functional.interpolate(a, size=(2, 2))
    loss = torch.sum(b)
    loss.backward()
    b.grad  # None
    a.grad  # not None (gradient of loss w.r.t. a)

Hi @Hovnatan_Karapetyan,

b.grad is None because b is a non-leaf Tensor.
In your example:

>>> a.is_leaf
True
>>> b.is_leaf
False

The gradient will be computed during the backward pass (since it is needed to compute a's gradient), but it won't be kept in .grad.

@Hovnatan_Karapetyan
You can also call:

b.retain_grad()

to keep the grad.
Please find more info here
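
For example, a minimal sketch based on the snippet above with retain_grad() added (the values in the comments assume the default 'nearest' interpolation mode):

    import torch

    a = torch.tensor([[[[1.0]]]]).requires_grad_()
    b = torch.nn.functional.interpolate(a, size=(2, 2))
    b.retain_grad()   # ask autograd to store the grad of this non-leaf Tensor
    loss = torch.sum(b)
    loss.backward()
    b.grad            # tensor([[[[1., 1.], [1., 1.]]]])
    a.grad            # tensor([[[[4.]]]])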

As @spanev said, the grad won’t be kept by default, so you might want to call b.retain_grad() before calling backward.
