Will the computation graph be overridden when forward() is invoked multiple times before backward()?

I found a line of code:

out_a, out_p, out_n = model(data_a), model(data_p), model(data_n)

in: https://github.com/liorshk/facenet_pytorch/blob/master/train_triplet.py

As you can see, forward() is invoked multiple times before backward(). My questions are:

1. Will the next invocation override the previous computation graph?

2. In my test, GPU memory consumption increases with each call, so even if overriding happened, it looks like a memory leak. How do I solve this problem?

The computational graph is not linked to the model; a new one is created each time you perform a forward pass. So no, it is not overridden, and yes, the memory usage increases because you create 3 computational graphs (each holding its own intermediate results needed for the backward pass). This is expected behaviour rather than a leak: once you call backward() on a loss that depends on all three outputs, the buffers of all three graphs are freed and the memory is reclaimed.
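
Here is a minimal sketch illustrating this. The model and data shapes are hypothetical stand-ins, not taken from the linked repo; the point is only that three forward passes build three independent graphs, and a single backward() through a combined loss releases all of them:

import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(128, 64)  # stand-in for the embedding network

data_a = torch.randn(32, 128)  # anchor batch
data_p = torch.randn(32, 128)  # positive batch
data_n = torch.randn(32, 128)  # negative batch

# Three forward passes -> three independent graphs, each keeping the
# intermediate activations it needs for its backward pass.
out_a, out_p, out_n = model(data_a), model(data_p), model(data_n)

loss = F.triplet_margin_loss(out_a, out_p, out_n)

# backward() traverses all three graphs (they share the model's
# parameters) and frees their buffers afterwards.
loss.backward()

If you only run the model for inference and never call backward(), wrap the forward passes in torch.no_grad() so no graph is recorded in the first place and the memory growth disappears.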