# Getting sum gradient accumulated in an `nn.Module` due to calling backward?

Is there an efficient way of getting the summed gradient generated by a `backward()` call, specifically for all parameters of a given `nn.Module`? I have two loss functions, each calling `backward()` and accumulating gradients before every optimizer step. I’d like to monitor what proportional impact each one has at any given time during training. Thank you!

Assuming `model` is an `nn.Module`, you could use `.parameters()`:

```python
grads = [p.grad for p in model.parameters()]  # p.grad is None until a backward pass touches p
# do things with these grads
```
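To attribute gradients to each of the two losses, one approach is to snapshot `p.grad` after the first `backward()` and subtract it from the accumulated total after the second. Here is a minimal sketch with a hypothetical toy model (`nn.Linear`, random data); the names `grads_a`, `grads_b`, `norm_a`, `norm_b` are illustrative, not from the original post:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Linear(4, 1)          # toy stand-in for the real model
x = torch.randn(8, 4)
y = torch.randn(8, 1)

out = model(x)
loss_a = F.mse_loss(out, y)      # first loss
loss_b = out.abs().mean()        # second loss (shares the same graph)

model.zero_grad()
loss_a.backward(retain_graph=True)
# Snapshot the gradient contributed by loss_a alone.
grads_a = [p.grad.clone() for p in model.parameters()]

loss_b.backward()
# p.grad now holds grad_a + grad_b; subtract the snapshot to isolate loss_b's share.
grads_b = [p.grad - ga for p, ga in zip(model.parameters(), grads_a)]

norm_a = sum(g.abs().sum().item() for g in grads_a)
norm_b = sum(g.abs().sum().item() for g in grads_b)
print(f"loss_a: {norm_a:.4f}, loss_b: {norm_b:.4f}")
```

`retain_graph=True` on the first `backward()` is needed because both losses share the forward graph through `out`.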

Adding on to @richard's answer: if `backward()` is called for two losses in each optimizer step, it might be easier to use `register_hook`, since it avoids storing previous grad values. http://pytorch.org/docs/master/autograd.html
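A minimal sketch of the hook approach: `Tensor.register_hook` fires when a tensor's gradient is computed, so a hook on each parameter can tally per-loss contributions without diffing snapshots. The toy model, the `totals` dict, and the `current` flag are illustrative assumptions, not from the thread:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Linear(4, 1)          # toy stand-in for the real model
x = torch.randn(8, 4)
y = torch.randn(8, 1)

# Running totals per loss; `current` names the loss currently being backwarded.
totals = {"loss_a": 0.0, "loss_b": 0.0}
current = None

def hook(grad):
    # Called once per parameter during each backward pass, with that
    # pass's gradient (before it is accumulated into .grad).
    totals[current] += grad.abs().sum().item()

for p in model.parameters():
    p.register_hook(hook)

out = model(x)
loss_a = F.mse_loss(out, y)
loss_b = out.abs().mean()

current = "loss_a"
loss_a.backward(retain_graph=True)
current = "loss_b"
loss_b.backward()

print(totals)
```

Because the hooks see each backward pass's gradient separately, `.grad` never needs to be cleared or copied between the two calls.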
