Equivalent to PyArray_SetBaseObject for tensors, to free memory

I’m creating a tensor in C++, turning it into a PyObject* with THPVariable_Wrap, packing that into a tuple, and then running some Python user code (on the interpreter embedded in my C++ process) which uses that tuple. The tensor’s memory is leaked.

With NumPy arrays you can call PyArray_SetBaseObject(array, tuple) so that the array’s data is tied to the tuple’s lifetime: when the tuple’s reference count drops to zero, the data is freed.

Is there any equivalent to that in libtorch, or another way to free the tensor’s data some time later?

Playing around, the following recovers about 99% of the leaked tensor data:

…sometime later, in another thread far far away…

```cpp
// obj is the PyObject* previously produced by THPVariable_Wrap
at::Tensor tensor = THPVariable_Unpack(obj);
auto ptr = tensor.unsafeReleaseIntrusivePtr();
ptr->release_resources();
```