I’m creating a tensor in C++, wrapping it into a PyObject* with THPVariable_Wrap, packing that into a tuple, and then running user Python code (on the interpreter embedded in my C++ application) that uses the tuple. The tensor’s memory leaks.
With NumPy arrays you can call PyArray_SetBaseObject(array, tuple), so that when the tuple’s reference count drops to zero, the array’s data is freed.
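(For context, NumPy's base-object mechanism is also visible from pure Python through the `.base` attribute — slicing sets it automatically, while PyArray_SetBaseObject is the C-API way to set the same link explicitly. A minimal illustration:)

```python
import numpy as np

owner = np.arange(10)   # owns the underlying buffer
view = owner[2:5]       # shares owner's buffer instead of copying

# NumPy recorded the owner as the view's base object, so the view
# holds a reference that keeps the buffer alive until the view dies.
assert view.base is owner
```
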
Is there an equivalent to that in libtorch — or some other way to free the tensor’s data at a later point?