Hi,
I read your message. I would like to know whether the memory-flushing functionality you mention is also exposed to C++ LibTorch developers. I am using LibTorch (C++) and cannot find a way to release all of the CUDA GPU memory used by a torch::nn::Module. I explained the problem with an example here: https://discuss.pytorch.org/t/release-all-cuda-gpu-memory-using-libtorch-c/108303
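To make the question concrete, here is a minimal sketch of what I have been trying (this assumes `c10::cuda::CUDACachingAllocator::emptyCache()` is the intended C++ entry point; that assumption is essentially my question):

```cpp
#include <torch/torch.h>
#include <c10/cuda/CUDACachingAllocator.h>

int main() {
  {
    // Build a module on the GPU; its parameters allocate CUDA memory.
    torch::nn::Linear net(1024, 1024);
    net->to(torch::kCUDA);
    // ... run some forward passes ...
  } // net goes out of scope here, so its tensors are destroyed,
    // but nvidia-smi still shows the memory as held by the process,
    // because the caching allocator keeps the blocks reserved.

  // My assumption: this returns cached, *unused* blocks to the driver.
  // It cannot free memory still referenced by live tensors, and the
  // CUDA context itself also keeps some memory that is never released.
  c10::cuda::CUDACachingAllocator::emptyCache();
  return 0;
}
```

Even with this, the GPU memory reported by nvidia-smi does not drop to zero, which is why I am asking whether a complete release is possible from C++.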
Thanks in advance.