Is this an effective way to compute the gradient norm after each epoch?

Is this code an effective way to compute the model's global L2 gradient norm after each training epoch:

```python
current_gradient_norm = nn.utils.clip_grad_norm_(
    model.parameters(), max_norm=float('inf'), norm_type=2.0
)
```
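
For context, this is roughly where the call sits in my training loop. The setup below (the `nn.Linear` model, random data, SGD optimizer, and epoch count) is just a minimal stand-in so the sketch runs on its own; my real model and data are different:

```python
import torch
import torch.nn as nn

# Hypothetical minimal setup so the sketch is runnable end-to-end;
# the real model, data, and optimizer are more complex.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()
inputs = torch.randn(64, 10)
targets = torch.randn(64, 1)

grad_norm_history = []  # one global-norm value per epoch

for epoch in range(3):
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()

    # clip_grad_norm_ returns the total (global) L2 norm over all parameter
    # gradients; with max_norm=inf no gradient is actually rescaled.
    current_gradient_norm = nn.utils.clip_grad_norm_(
        model.parameters(), max_norm=float('inf'), norm_type=2.0
    )
    grad_norm_history.append(current_gradient_norm.item())

print(grad_norm_history)
```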

I do not want to clip the gradients; I only want to compute and store the entire model's global gradient norm after each epoch.
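
For comparison, this is the manual computation I believe the returned value should match (a sketch, assuming the parameter `.grad` fields have already been populated by `loss.backward()`; `global_grad_norm` is just a helper name I made up):

```python
import torch

def global_grad_norm(model, norm_type=2.0):
    # Per-parameter gradient norms, skipping parameters that have no gradient.
    grad_norms = [
        p.grad.detach().norm(norm_type)
        for p in model.parameters()
        if p.grad is not None
    ]
    if not grad_norms:
        return torch.tensor(0.0)
    # Combine the per-parameter norms into a single global norm
    # over all parameters (for norm_type=2 this is sqrt of the sum of squares).
    return torch.norm(torch.stack(grad_norms), norm_type)
```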