History of gradients in PyTorch

Hi
I am doing a task very similar to translation.

There are two nested for loops in the picture below.
At j = 0, I assign something to h and do something in the inner loop.
Then, at j = 1, I assign something else to h, and so on.

Doesn't the j = 1 step remove the gradient history from j = 0 (at backpropagation time)?

If that is the case, how can I fix it?

Hi,

When you do `h = xxx` in Python, you bind the name `h` to the Python object `xxx`. The old object that `h` was pointing to is discarded, unless something else still holds a reference to it.

That being said, the gradients for the different Tensors will still be computed properly, because the autograd graph keeps its own references to the Tensors it needs for the backward pass. What is the exact issue you're seeing here?
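Since the original loop isn't shown, here is a minimal sketch (the loop body and shapes are invented for illustration) showing that rebinding `h` on each iteration does not erase the gradient history from earlier iterations:

```python
import torch

x = torch.ones(3, requires_grad=True)
total = torch.zeros(())

for j in range(2):
    # Rebind h each iteration, as in the question.
    # The tensor h pointed to before is no longer reachable by name,
    # but autograd's graph still references it for the backward pass.
    h = x * (j + 1)          # j=0: h = x; j=1: h = 2*x
    total = total + h.sum()  # use h, then it gets rebound next iteration

total.backward()
# Both iterations contribute: d(total)/dx = 1 + 2 = 3 per element
print(x.grad)  # tensor([3., 3., 3.])
```

Both the j = 0 and j = 1 assignments to `h` show up in `x.grad`, so no history is lost by the rebinding itself.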
