Loss .grad prints None while printing the loss itself shows a grad_fn

When I print the loss, it gives the output below.

LOSS VALUE IS  tensor(92268.0312, device='cuda:0', grad_fn=<SumBackward0>)

When I print loss.shape and loss.grad, I get the output below.

LOSS shape  torch.Size([]) and grad None

Why does the shape show nothing and grad show None, while printing the loss itself shows a value as well as a grad_fn?
Are the gradients flowing correctly, or am I making a mistake somewhere?

Hi Cbd!

torch.Size([]) is the shape of a zero-dimensional tensor – a pure scalar.
It’s a conceptually-legitimate beast that is supported by pytorch.
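For example, a minimal sketch (using a made-up tensor rather than your actual loss):

```python
import torch

# Summing a tensor reduces it to a zero-dimensional (scalar) tensor,
# whose shape is the empty torch.Size([]).
t = torch.arange(4.0)
loss = t.sum()
print(loss.shape)   # torch.Size([])
print(loss.item())  # 6.0 -- .item() extracts the plain python number
```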

Do you get grad for leaf variables (after calling loss.backward())?

Although it computes them, autograd does not retain grad for non-leaf
variables.
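Here is a minimal sketch of the difference (w is just a hypothetical leaf tensor standing in for your model's parameters, not your actual code):

```python
import torch

# "w" is a leaf tensor: created directly with requires_grad = True,
# not produced by an operation on other tensors.
w = torch.randn(3, requires_grad=True)

# "loss" is a non-leaf tensor: it is the result of operations on w, so it
# carries a grad_fn, but autograd will not keep a .grad on it by default.
loss = (w * 2.0).sum()
print(loss.grad_fn)    # <SumBackward0 object at ...>
print(loss.grad)       # None (non-leaf; recent pytorch versions also warn here)

# Optional: ask autograd to keep the gradient on this non-leaf tensor.
loss.retain_grad()

loss.backward()
print(w.grad)          # tensor([2., 2., 2.]) -- gradients do flow to the leaf
print(loss.grad)       # tensor(1.) -- present only because of retain_grad()
```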

Best.

K. Frank