Crash when launching the same model sequentially

I am using a model with libtorch, and everything works fine the first time I launch it.
The libtorch module lives inside a C++ class, and I can create and delete my object.
The class has a module member where I load the model; on deletion I am not sure how to unload the module, so I am calling CUDACachingAllocator::emptyCache().

If I create/delete my object multiple times in a for loop, I see this behavior:

loop 1 -> load / unload success
loop 2 -> load failed with error
loop 3 -> load / unload success
loop 4 -> load failed with error
loop 5 -> load / unload success …

I am sure it's because I am not doing the right thing when I release the module, but I am not able to figure out what. Any idea why I get this error?
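For reference, here is a minimal sketch of the wrapper described above. The class name, member names, and model path are assumptions for illustration, not my actual code:

```cpp
#include <torch/script.h>
#include <c10/cuda/CUDACachingAllocator.h>
#include <memory>
#include <string>

// Hypothetical wrapper; the real class looks roughly like this.
class ModelRunner {
public:
    explicit ModelRunner(const std::string& path)
        : module_(std::make_unique<torch::jit::script::Module>(
              torch::jit::load(path, torch::kCUDA))) {}

    ~ModelRunner() {
        // Destroy the module (and the CUDA tensors it owns) *before*
        // asking the caching allocator to release its cached blocks.
        module_.reset();
        c10::cuda::CUDACachingAllocator::emptyCache();
    }

private:
    std::unique_ptr<torch::jit::script::Module> module_;
};

int main() {
    for (int i = 0; i < 5; ++i) {
        ModelRunner runner("model.pt");  // fails on every second iteration
        // ... run inference here ...
    }
}
```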

[ERROR] CUDA error: resource not mapped (launch_kernel at /Volumes/torch_cuda/darwin/src/aten/src/ATen/native/cuda/CUDALoops.cuh:217)
frame #0: c10::Error::Error(c10::SourceLocation, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 68 (0x112bf2ab4 in libc10.dylib)
frame #1: void at::native::gpu_kernel_impl<__nv_hdl_wrapper_t<false, false, __nv_dl_tag<void (*)(at::TensorIterator&, c10::Scalar), &(at::native::fill_kernel_cuda(at::TensorIterator&, c10::Scalar)), 1u>, unsigned char (), unsigned char> >(at::TensorIterator&, __nv_hdl_wrapper_t<false, false, __nv_dl_tag<void (*)(at::TensorIterator&, c10::Scalar), &(at::native::fill_kernel_cuda(at::TensorIterator&, c10::Scalar)), 1u>, unsigned char (), unsigned char> const&) + 8119 (0x101e08ab7 in libtorch_cuda.dylib)
frame #2: at::native::fill_kernel_cuda(at::TensorIterator&, c10::Scalar) + 1319 (0x101e05177 in libtorch_cuda.dylib)
frame #3: void at::native::DispatchStub<void (*)

Could you post the code you are using to load and unload the model?
An executable code snippet to reproduce this issue would be even better. :slight_smile: