Does iteratively executing a function on a training tensor affect the backward pass?

I wrote the following code.

for i in range(5000):
    tensor = tanh(tensor)    # torch.tanh; reassigns tensor to the output
    activation = []
    loss = loss_function()

model2 is a trained model and its parameters are frozen. tensor is the input of model2 and has requires_grad=True set, which means I want to train tensor. The input of model2 should be in [-1, 1], so before executing model2 I use tanh to transform tensor. But does this stack tanh into many layers, e.g. tanh(tanh(tanh(…(tensor)…)))?
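A minimal sketch of this setup, assuming a toy Linear layer as a stand-in for model2 and a placeholder squared-sum loss (both are assumptions, not the poster's actual code):

```python
import torch

# Hypothetical stand-in for the trained model2
model2 = torch.nn.Linear(4, 4)
for p in model2.parameters():
    p.requires_grad_(False)  # freeze the trained model's parameters

# The input we want to train
tensor = torch.randn(4, requires_grad=True)

out = model2(torch.tanh(tensor))  # tanh keeps the input in [-1, 1]
loss = out.pow(2).sum()           # placeholder loss (assumption)
loss.backward()
```

Even with the model's parameters frozen, autograd still computes the gradient with respect to the input, so tensor.grad is populated after backward().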

No, in each iteration only a single tanh will be applied to tensor.
However, since you are using retain_graph=True, the backward call will backpropagate through all iterations, so make sure this is the desired workflow.
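The chaining across iterations can be seen in a minimal reproduction (the sum() loss is an assumption for illustration): each tanh call adds only one node, but because tensor is reassigned to the output, the second iteration's graph still hangs off the first one.

```python
import torch

tensor = torch.randn(3, requires_grad=True)

# First iteration: tanh builds a small graph, backward frees it
tensor = torch.tanh(tensor)
tensor.sum().backward()

# Second iteration: the new tanh node still points into the freed graph,
# so backward fails unless retain_graph=True was used above
tensor = torch.tanh(tensor)
err = None
try:
    tensor.sum().backward()
except RuntimeError as e:
    err = e
print(err)
```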

Thanks a lot. I don’t want the backward pass to backpropagate through all iterations. But when I call backward with retain_graph=False, I get the following error.

Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.

How do I avoid this error?

Your code has a recursion, since you are reusing tensor, so the error is expected.
If you don’t want to backpropagate through all previous iterations, you will need to .detach() the tensor before passing it to tanh in the next iteration.
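A minimal sketch of the detach approach, again with a placeholder sum() loss (an assumption, not the poster's actual loss):

```python
import torch

tensor = torch.randn(4, requires_grad=True)

for i in range(3):
    out = torch.tanh(tensor)  # single tanh applied per iteration
    loss = out.sum()          # placeholder loss (assumption)
    loss.backward()           # works without retain_graph=True

    # Cut the graph before reusing the output, so the next iteration's
    # backward does not try to traverse this (already freed) graph
    tensor = out.detach().requires_grad_()
```

After .detach().requires_grad_(), tensor is a fresh leaf again, so each iteration builds and frees its own small graph instead of one ever-growing chain.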