How to know if gradients are making it back to the model

I am trying out a lot of new ideas and loss functions, and I keep finding myself questioning whether the gradients are actually making it back to the model the way I want them to.

For instance, I am outputting the parameters of a normal distribution (mu and sigma) from my model, then creating a PyTorch Normal distribution from them and doing some operations on it to get a loss. This is one part of a larger multi-part loss function, so how can I be sure that the loss from the Normal distribution is actually reaching the model?
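Roughly like this, I mean (a minimal sketch; `GaussianHead`, the layer names, and the shapes are just placeholders for my actual model):

```python
import torch
import torch.nn as nn
from torch.distributions import Normal

class GaussianHead(nn.Module):
    """Toy stand-in for my model: emits mu and a positive sigma."""
    def __init__(self, in_features):
        super().__init__()
        self.mu_layer = nn.Linear(in_features, 1)
        self.log_sigma_layer = nn.Linear(in_features, 1)

    def forward(self, x):
        mu = self.mu_layer(x)
        sigma = self.log_sigma_layer(x).exp()  # exp keeps sigma positive
        return mu, sigma

model = GaussianHead(in_features=8)
x = torch.randn(4, 8)
target = torch.randn(4, 1)

mu, sigma = model(x)
dist = Normal(mu, sigma)
nll = -dist.log_prob(target).mean()   # the Normal-distribution part of the loss
other_terms = (mu ** 2).mean()        # stand-in for the rest of the multi-part loss
loss = nll + other_terms
```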

By testing. `torch.autograd.gradcheck` is one option: it compares your analytic gradients against numerically estimated ones, though it needs float64 inputs with `requires_grad=True`. Another is to call `retain_grad()` on the intermediate tensors at the beginning of the loss calculation you are concerned about and inspect their `.grad` after `backward()`. You can also backpropagate only the suspect loss term and check whether the model's parameter `.grad` values are non-zero.
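For example, something along these lines, building on the sketch in your question (`model`, `x`, `target`, and `nll` as defined there):

```python
# Check 1: backprop ONLY the term in question, then look at parameter grads.
model.zero_grad()
nll.backward(retain_graph=True)  # keep the graph so the full loss is still usable
for name, p in model.named_parameters():
    grad_sum = 0.0 if p.grad is None else p.grad.abs().sum().item()
    print(f"{name}: {grad_sum}")  # non-zero => this term reaches that parameter

# Check 2: retain_grad() on intermediates at the start of the suspect calculation.
mu, sigma = model(x)
mu.retain_grad()      # non-leaf tensors drop .grad unless you ask them to keep it
sigma.retain_grad()
loss = -Normal(mu, sigma).log_prob(target).mean()
loss.backward()
print(mu.grad.abs().sum(), sigma.grad.abs().sum())  # None or zero => broken graph

# Check 3: gradcheck compares analytic vs. numerical gradients (needs float64).
from torch.autograd import gradcheck
mu64 = torch.randn(3, dtype=torch.double, requires_grad=True)
sigma64 = torch.rand(3, dtype=torch.double, requires_grad=True) + 0.1
fn = lambda m, s: Normal(m, s).log_prob(torch.zeros(3, dtype=torch.double)).sum()
print(gradcheck(fn, (mu64, sigma64)))  # True if gradients check out
```

If `mu.grad` or `sigma.grad` comes back `None` or all zeros, something between the model output and that loss term has detached the graph (e.g. a `.detach()`, `.item()`, or a non-differentiable op).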