Accessing gradients directly from the loss

Is there a way to access the gradients directly from the loss, without using an optimizer?

For example,

inputs = inputs.half().to(device=self.args.device, non_blocking=True)
labels = labels.to(device=self.args.device, non_blocking=True)
outputs = self.model(inputs)
minibatch_loss = self.criterion(outputs, labels)

Can I access the gradients from minibatch_loss somehow?


If you use autograd.backward() (or minibatch_loss.backward()), it populates the .grad field of all the leaf Tensors (tensors created by the user with requires_grad=True, which includes every nn.Parameter).
So since you use an nn.Module, you can do:

minibatch_loss.backward()
for p in self.model.parameters():
    print(p.grad)
Otherwise, you can use autograd.grad, which returns the gradients directly as a tuple of Tensors (without touching the .grad fields) by calling grads = autograd.grad(minibatch_loss, self.model.parameters()).
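A minimal self-contained sketch comparing the two approaches, using a tiny nn.Linear model and random data (both hypothetical, just for illustration); note that retain_graph=True is needed here only because we call autograd.grad after backward() on the same graph:

```python
import torch
from torch import nn

# Tiny stand-in model and data (hypothetical, for illustration only).
model = nn.Linear(4, 2)
criterion = nn.MSELoss()
inputs = torch.randn(8, 4)
labels = torch.randn(8, 2)

outputs = model(inputs)
minibatch_loss = criterion(outputs, labels)

# Option 1: backward() populates .grad on every leaf parameter.
minibatch_loss.backward(retain_graph=True)
grads_from_backward = [p.grad.clone() for p in model.parameters()]

# Option 2: autograd.grad returns the gradients directly as a tuple,
# without writing to the .grad fields.
grads = torch.autograd.grad(minibatch_loss, model.parameters())

# Both approaches yield the same gradients.
for g1, g2 in zip(grads_from_backward, grads):
    assert torch.allclose(g1, g2)
```

In a normal training loop you would omit retain_graph=True and pick only one of the two approaches per backward pass.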