Is hidden_state an in-place operation in this example?

In the following example, the variable hidden is reassigned at every time step. Is this assignment an in-place operation? If it is, does calling backward() yield correct gradients?
Thank you!

def train(category_tensor, line_tensor):
    hidden = rnn.init_hidden()
    for i in range(line_tensor.size()[0]):
        output, hidden = rnn(line_tensor[i], hidden)

    loss = criterion(output, category_tensor)
    loss.backward()

    return output, loss.item()

When you do hidden = ..., this is not an in-place operation. It only changes which tensor the Python name “hidden” refers to; the tensor it previously referred to is untouched.
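A minimal sketch of this point (a scalar stand-in for the RNN loop; the tensors here are illustrative, not part of the original code): each assignment creates a new tensor and merely rebinds the name, exactly like hidden in the training loop, so backward() still produces the correct gradient.

```python
import torch

w = torch.tensor(2.0, requires_grad=True)
h = torch.tensor(1.0)

# Each assignment below creates a NEW tensor and rebinds the
# name "h"; nothing is modified in place.
for _ in range(3):
    h = h * w

h.backward()  # h == w**3, so dh/dw == 3 * w**2 == 12
print(w.grad)  # tensor(12.)
```

A true in-place version would be h.mul_(w), which modifies the tensor's memory directly instead of rebinding the name.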

Also, autograd checks that in-place operations are valid and will raise an error if data needed to compute gradients has been overwritten.
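For example (a small sketch, not from the original thread), exp() saves its output to compute its gradient; modifying that output with a genuinely in-place op makes autograd refuse to run backward rather than silently return a wrong gradient:

```python
import torch

a = torch.randn(3, requires_grad=True)
b = a.exp()   # exp() saves its output b for the backward pass
b.add_(1)     # a true in-place op: modifies b's memory directly

# autograd's version counter notices that a saved tensor was
# modified and raises instead of computing a wrong gradient:
err = None
try:
    b.sum().backward()
except RuntimeError as e:
    err = e
print(err)  # "... has been modified by an inplace operation"
```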


Thanks. Just to make it clear: are these hidden tensors still in memory when backward() is called?

It depends.
If they are needed to compute gradients, autograd will keep them in memory.
If they are not needed to compute gradients, they are freed once you bind the Python name to a new tensor and no other reference to them remains.
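Both cases can be seen in a short sketch (illustrative tensors, not the original code): rebinding the name does not free a tensor the graph still needs, and once backward() has run, the saved tensors are released (unless you pass retain_graph=True).

```python
import torch

a = torch.randn(3, requires_grad=True)
h = a.exp()          # exp() saves its output for the backward pass
loss = h.sum()

# Rebinding the name, as the training loop does with `hidden`,
# does not free the saved tensor: the graph behind `loss` holds it.
h = torch.zeros(3)

loss.backward()
assert torch.allclose(a.grad, a.exp())  # d sum(exp(a)) / da == exp(a)

# After backward(), the saved tensors ARE freed, so a second
# backward over the same graph raises an error:
try:
    loss.backward()
except RuntimeError as e:
    print("second backward failed:", e)
```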