Question about training

I tried to print the return value of loss.backward(), but I got None, even though the network trains fine.
Why?

tensor.backward() won’t return anything. Instead, the .grad attributes of all parameters used to compute the loss will be populated and accumulated.
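
For reference, a quick sketch (the tiny model and random input are just placeholders):

import torch
import torch.nn as nn

model = nn.Linear(4, 1)
loss = model(torch.randn(2, 4)).sum()

ret = loss.backward()
print(ret)                # None -- backward() has no return value
print(model.weight.grad)  # the gradient was populated as a side effect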

So should I print param_name.grad?

It will compute backpropagation. If you want to do computations manually without tracking gradients, wrap them in torch.no_grad().
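
For example, a minimal sketch:

import torch

x = torch.randn(3, requires_grad=True)
with torch.no_grad():
    y = x * 2             # computed without building the autograd graph
print(y.requires_grad)    # False -- backward() cannot flow through y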

Yes, if you want to print out all gradients, you could use:

print(model.layer.param.grad)  # gradient of a single parameter
# or to print all gradients
for name, param in model.named_parameters():
    print(name, param.grad)
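
Keep in mind gradients accumulate across backward() calls, so in a training loop you would usually inspect them right after loss.backward() and clear them with optimizer.zero_grad() before the next iteration. A sketch with a placeholder model and synthetic data:

import torch
import torch.nn as nn

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(2):
    optimizer.zero_grad()                   # clear accumulated gradients
    loss = model(torch.randn(8, 4)).pow(2).mean()
    loss.backward()                         # fills param.grad for every parameter
    for name, param in model.named_parameters():
        print(step, name, param.grad.norm())
    optimizer.step()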

Thank you very much.

Thank you :handshake: @ptrblck