# Leaf tensor but without grad after backward

```python
import torch

a = torch.randn(3)                      # requires_grad defaults to False
b = torch.randn(3)
c = torch.randn(3, requires_grad=True)  # only c tracks gradients

f = a + b  # no input requires grad -> f is a leaf with requires_grad=False
g = b + c  # depends on c -> g requires grad (non-leaf)

y = f + g
y.backward(torch.ones_like(y))

print(f.is_leaf, f.grad, g.grad)  # reading .grad of the non-leaf g warns and returns None
```

Output: `True None None`

Why doesn't `f` have its grad? It is a leaf variable.

It is a leaf, but it does not require gradients (`f.requires_grad` is `False`), so no gradients are computed for it. Autograd only populates `.grad` on leaf tensors that require gradients.
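A quick way to see both flags at once (a minimal sketch with freshly created tensors, not the thread's exact code):

```python
import torch

a = torch.randn(3)      # requires_grad is False by default
b = torch.randn(3)

f = a + b               # no input requires grad, so no grad_fn is recorded
print(f.is_leaf)        # True: a tensor with no grad_fn counts as a leaf
print(f.requires_grad)  # False: so backward() will never populate f.grad
```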


Try manually setting `f` to require gradients:

`f.requires_grad_(True)`
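For example, with a setup like the one above (`a`, `b`, `c` as assumed stand-ins, only `c` requiring grad), flipping the flag on `f` before it is consumed makes autograd accumulate into it:

```python
import torch

a = torch.randn(3)
b = torch.randn(3)
c = torch.randn(3, requires_grad=True)

f = a + b
f.requires_grad_(True)  # allowed: f is still a leaf, so the flag can be changed in place

g = b + c
y = f + g
y.backward(torch.ones_like(y))

print(f.grad)  # tensor([1., 1., 1.]) -- dy/df is all ones
```

Note that `requires_grad_` only works on leaf tensors; calling it on a non-leaf raises a `RuntimeError`.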
