Model.zero_grad() only fills the .grad of the model's parameters with 0.
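A minimal sketch of that behavior. Note that newer PyTorch versions default to setting the gradients to None instead, so the sketch passes `set_to_none=False` to reproduce the fill-with-zeros behavior described here:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
model(torch.randn(3, 4)).sum().backward()

# backward() has populated the parameter gradients
assert model.weight.grad is not None

# zero_grad only resets the parameter grads; set_to_none=False keeps
# the old fill-with-zeros behavior on recent PyTorch versions
model.zero_grad(set_to_none=False)
print(model.weight.grad.abs().sum().item())  # 0.0
```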

@ypxie autograd by default frees the intermediate gradients that are not needed anymore, so that memory usage stays minimal.
If you want to inspect internal gradients, you can use hooks, as explained in this post.
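A small sketch of inspecting an intermediate gradient with a hook, using `register_hook` on the intermediate result to capture its gradient as it flows through backward:

```python
import torch

x = torch.randn(3, requires_grad=True)
y = x * 2  # intermediate result; its .grad is freed by default

grads = {}
# the hook fires during backward() and receives dL/dy
y.register_hook(lambda g: grads.setdefault('y', g))

y.sum().backward()
print(grads['y'])  # gradient of sum() w.r.t. y: a tensor of ones
```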
But if you don’t want the gradients to be freed, you can pass retain_variables=True to backward, as explained in the docs.
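A sketch of retaining the graph so backward can run twice. Note that `retain_variables=True` was the old spelling; later PyTorch releases renamed it to `retain_graph=True`, which is what the sketch uses:

```python
import torch

x = torch.randn(3, requires_grad=True)
y = (x * x).sum()

# without retain_graph=True, the first backward frees the graph's
# buffers and a second backward would raise an error
y.backward(retain_graph=True)
first = x.grad.clone()

y.backward()  # works because the graph was retained
# gradients accumulate into .grad, so it is now doubled
assert torch.allclose(x.grad, 2 * first)
```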