CUDA programming using PyTorch

I am using a loop that calls three functions, and each of these functions calls another three functions. When I run this logic on the GPU, I get an "out of memory" error. Please tell me how to manage the memory by clearing unwanted parameters from the GPU after they have been used.

You can delete Python objects via `del object` and allow PyTorch to reuse their memory.
Note, however, that the underlying memory will only be freed once no references to the deleted object remain.
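As a minimal sketch of this reference-counting behaviour (using a plain Python object and the standard `weakref` module so it runs without a GPU; the same rule applies to CUDA tensors):

```python
import weakref

class FakeTensor:
    """Stand-in for a torch.Tensor that would hold GPU memory."""
    pass

t = FakeTensor()
alias = t                        # a second reference to the same object
tracker = weakref.ref(t)         # lets us observe when the object is freed

del t                            # removes one name, but `alias` still refers to it
assert tracker() is not None     # object (and its memory) is NOT freed yet

del alias                        # last reference gone; CPython frees it immediately
assert tracker() is None         # now the memory can actually be reclaimed
```

With real CUDA tensors, once the last reference is dropped the memory returns to PyTorch's caching allocator for reuse by later allocations; `torch.cuda.empty_cache()` can additionally release cached blocks back to the driver, and `torch.cuda.memory_allocated()` reports the bytes currently held by tensors. So in your loop, `del` intermediate tensors (or let them go out of scope) as soon as they are no longer needed, and make sure no other variable still points at them.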