When using libtorch in my project, I see a large amount of CUDA memory reserved by PyTorch, which leads to a CUDA out-of-memory error:
what(): CUDA out of memory. Tried to allocate 1010.00 MiB (GPU 0; 10.92 GiB total capacity; 8.69 GiB already allocated; 978.38 MiB free; 8.79 GiB reserved in total by PyTorch)
Is it possible to disable this caching behavior, or otherwise release the memory PyTorch keeps reserved? Thanks.
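For context, the closest thing I have found so far is the `PYTORCH_NO_CUDA_MEMORY_CACHING` environment variable, which the PyTorch docs describe as a debugging tool that bypasses the caching allocator entirely (every allocation then goes straight to `cudaMalloc`/`cudaFree`, so it is slow). A minimal sketch, assuming `my_libtorch_app` stands in for the actual binary:

```shell
# Sketch: disable PyTorch's CUDA caching allocator for one run.
# PYTORCH_NO_CUDA_MEMORY_CACHING=1 is documented as a debugging aid;
# expect a noticeable slowdown since allocations are no longer cached.
export PYTORCH_NO_CUDA_MEMORY_CACHING=1
echo "$PYTORCH_NO_CUDA_MEMORY_CACHING"   # confirm the variable is set

# ./my_libtorch_app   # hypothetical binary name; launch with the variable set
```

Is this the intended way to do it, or is there a proper libtorch API for releasing the reserved memory?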