Is there a way to release GPU memory in libtorch?

I encapsulate model loading and forward computation into a class using libtorch, and I want to release the GPU memory (including the model) when the class is destroyed.

I have tried `c10::cuda::CUDACachingAllocator::emptyCache()`, but it doesn't seem to be working.

I have the same question. @cyanM did you find any solution?
`c10::cuda::CUDACachingAllocator::emptyCache()` released some GPU memory for me, but not all of it.

I have the same problem. Any solution?
Thanks.

Check this thread: How to effectively release a Tensor in Pytorch? - #2 by ptrblck
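
The point made in that thread is that `emptyCache()` can only return cached blocks that no live tensor still owns, so every tensor and the module itself must be destroyed first. A minimal sketch of a wrapper class following that order (the `torch::nn::Linear` module here is just a placeholder for your real model, and the class name is hypothetical):

```cpp
#include <torch/torch.h>
#include <c10/cuda/CUDACachingAllocator.h>

// Hypothetical wrapper: holds a model on the GPU and releases the memory
// back to the driver on destruction.
class GpuModel {
public:
    GpuModel() : net_(torch::nn::Linear(1024, 1024)) {
        net_->to(torch::kCUDA);
    }

    ~GpuModel() {
        // 1. Drop the module first, so its parameter tensors are freed
        //    back into the caching allocator.
        net_ = nullptr;
        // 2. Only then ask the caching allocator to return the now-unused
        //    cached blocks to the CUDA driver. Blocks still referenced by
        //    any live tensor cannot be released.
        c10::cuda::CUDACachingAllocator::emptyCache();
    }

    torch::Tensor forward(const torch::Tensor& x) {
        torch::NoGradGuard no_grad;  // avoid keeping autograd state alive
        return net_->forward(x);
    }

private:
    torch::nn::Linear net_{nullptr};
};
```

If `nvidia-smi` still shows memory in use after this, check for tensors returned from `forward()` that are still alive in the caller; they keep their blocks cached until they too go out of scope.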