About memory overloading during test time

I want to find a better deep neural network architecture for a custom Siamese object tracker that uses a correlation filter, so I put the training and testing functions in a loop (a new architecture is designed, trained, and tested in each iteration).
After a few iterations, the RAM and swap memory start to fill up during testing, and the process is then killed by Linux. How can I solve this problem?

Based on the description, it seems you are storing data on the host in each iteration.
This could be the case, e.g., if you are training on the CPU and are storing tensors together with their computation graph (e.g. via loss_total += loss). In that case, detach() the losses before accumulating them, or call item() on them.
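For illustration, a minimal sketch of the difference (the model and variable names here are just placeholders, not your actual code):

```python
import torch

model = torch.nn.Linear(10, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

loss_total = 0.0
for _ in range(100):
    x = torch.randn(8, 10)
    y = torch.randn(8, 1)

    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

    # loss_total += loss          # keeps the whole computation graph alive -> memory grows
    loss_total += loss.item()     # stores only the Python float
    # loss_total += loss.detach() # alternative: keeps a tensor but drops the graph
```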

Thank you @ptrblck. Both the training and testing procedures run on the GPU, and I am not storing any unneeded data during training or testing; the garbage collector is also called manually inside the loop.
I still wonder why my system fills up its memory (RAM and swap).

That’s hard to tell without seeing the code. Could you try to create a minimal, executable code snippet which would reproduce the increase in host memory usage?
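In the meantime, one way to narrow it down would be to log the host memory usage of the process in each iteration, e.g. with psutil (a sketch, assuming psutil is installed; train_and_test is a placeholder for your actual design/train/test step):

```python
import gc
import psutil

def train_and_test(iteration):
    # placeholder for designing, training, and testing one architecture
    _ = [float(i) for i in range(10_000)]

process = psutil.Process()

for i in range(5):
    train_and_test(i)
    gc.collect()
    rss_mb = process.memory_info().rss / 1024**2
    print(f"iteration {i}: host RSS = {rss_mb:.1f} MB")
```

If the reported RSS keeps growing across iterations, something created inside the loop is still referenced (e.g. appended to a list, kept in a closure, or cached), and you can bisect the loop body to find it.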