I occasionally like to inspect the gradients of a model, but since I started using AMP I no longer know how to do that. Can you kindly help me figure out how to obtain the gradients, given that they are scaled?
I usually do:
grads = []
for p in model.parameters():
    if p.grad is None or not p.requires_grad:
        continue
    grads.append(p.grad.detach())
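Equivalently, keyed by parameter name, which I find easier to inspect (just a sketch of the same idea):

# Same collection, but keyed by name; clone() so the stored tensors
# are not mutated by a later optimizer step or zero_grad()
grads = {
    name: p.grad.detach().clone()
    for name, p in model.named_parameters()
    if p.requires_grad and p.grad is not None
}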
With AMP, I do the following to train the model:
self._scaler.scale(loss).backward()  # backward on the scaled loss, so p.grad is scaled
assert parameters is not None
self._scaler.unscale_(optimizer)  # unscale the gradients in place
torch.nn.utils.clip_grad_norm_(parameters, clip_value, norm_type=norm_type)
self._scaler.step(optimizer)  # skips the step if the gradients contain inf/NaN
self._scaler.update()
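My current guess is that, since unscale_ divides the gradients in place, the real (unscaled) gradients should be readable between unscale_ and step, e.g. with something like this sketch inserted right after the clipping line:

# Sketch: at this point p.grad should already be unscaled.
# clone() so the stored tensors are not mutated by step() or zero_grad();
# this assumes parameters is a materialized list, not a one-shot generator.
grads = [p.grad.detach().clone() for p in parameters if p.grad is not None]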
Can somebody tell me how I can get the gradients of these model parameters in this case, after the loss has been scaled? If I iterate over the parameters again, can I still get each gradient? Will it be scaled, or does the sketch above already give the unscaled values?