Hi Aerin! Nice to see you here
I found this post with an answer by @albanD: "Why do we need to set the gradients manually to zero in pytorch?"
It explains the design decision to accumulate gradients into `.grad` when `.backward()` is called. I assume the same argument applies to `.gradient()`.
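For anyone skimming, here is a minimal sketch of the accumulation behavior that post describes (the tensor and the losses are made up purely for illustration):

```python
import torch

x = torch.ones(3, requires_grad=True)

(2 * x).sum().backward()
print(x.grad)  # tensor([2., 2., 2.])

# A second backward pass *adds* to the existing .grad buffer:
(3 * x).sum().backward()
print(x.grad)  # tensor([5., 5., 5.])  <- 2 + 3, accumulated

# ...which is why training loops clear gradients each step,
# e.g. via optimizer.zero_grad() or directly:
x.grad.zero_()
```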