How to use a module as a feature extractor only in the forward pass?

I found a line of code:

out_a, out_p, out_n = model(data_a), model(data_p), model(data_n)    

in: https://github.com/liorshk/facenet_pytorch/blob/master/train_triplet.py
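For context, the surrounding training step looks roughly like this (a sketch with a stand-in model and random data, not the actual code from the repository):

import torch
import torch.nn as nn

# Stand-in embedding network, optimizer and random triplet batches,
# only to illustrate the general pattern
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.TripletMarginLoss(margin=1.0)
data_a, data_p, data_n = (torch.randn(8, 128) for _ in range(3))

# three forward passes, then a single backward pass on the triplet loss
out_a, out_p, out_n = model(data_a), model(data_p), model(data_n)
loss = criterion(out_a, out_p, out_n)

optimizer.zero_grad()
loss.backward()
optimizer.step()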

As you can see, “forward()” is invoked multiple times before “backward()”. In my test, GPU memory consumption increases accordingly, so it looks like a GPU memory leak. My questions are:

1. How can I solve this problem?
2. Is it possible to use the model as a pure feature extractor for these three consecutive invocations (with some modification, of course), and then use it as a regular “forward()” in a fourth invocation? If yes, how can I implement it?

The memory increases because each forward pass creates intermediate variables, which are needed to calculate the gradients in the backward pass. So it shouldn’t be a memory leak.
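Here is a minimal sketch (again with a stand-in model and random data) showing that a plain forward pass keeps the graph that “backward()” will later need:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
data_a = torch.randn(8, 128)

out_a = model(data_a)
print(out_a.requires_grad)  # True: the graph and its intermediates are kept alive
print(out_a.grad_fn)        # e.g. <AddmmBackward0 object at 0x...>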

If you just need the features without backpropagating through your model, you should wrap your code in a with torch.no_grad(): block to avoid storing the intermediates.
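A minimal sketch of that, with placeholder model and data in place of the ones from train_triplet.py:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
data_a, data_p, data_n = (torch.randn(8, 128) for _ in range(3))

model.eval()  # usually desired for pure feature extraction
with torch.no_grad():
    # no graph is recorded, so the intermediates are freed right away
    # and memory stays flat across the three calls
    out_a, out_p, out_n = model(data_a), model(data_p), model(data_n)

print(out_a.requires_grad)  # False: you cannot backpropagate through these outputs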

Could you explain the second point a bit? I’m not sure what the fourth pass would be.