I want to integrate PyTorch's pybind11 bindings into my project.
My project allocates a fixed-size CUDA buffer to hold the network output, and I wrap this buffer in a torch::Tensor via torch::from_blob.
After that, I can manipulate the torch tensor in Python.
However, whenever the torch tensor is destroyed (no longer referenced) in Python, it releases the memory I want to keep.
Is it possible to prevent the memory from being released when the torch tensor is destroyed?
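For context, here is a minimal sketch of the setup I'm describing (the function name `wrap_output_buffer` is just illustrative). As I understand it, torch::from_blob has an overload that accepts a deleter, which is what controls whether the wrapped memory is freed when the tensor's storage is destroyed:

```cpp
#include <torch/torch.h>
#include <cuda_runtime.h>

// Illustrative helper: wraps an existing device buffer in a tensor
// without transferring ownership. The buffer must outlive the tensor.
torch::Tensor wrap_output_buffer(float* dev_ptr, int64_t n) {
    auto options = torch::TensorOptions()
                       .dtype(torch::kFloat32)
                       .device(torch::kCUDA);
    // No-op deleter: when the tensor is destroyed, dev_ptr is left
    // intact instead of being freed.
    return torch::from_blob(
        dev_ptr, {n},
        [](void*) { /* intentionally keep the buffer alive */ },
        options);
}
```

The intent is that destroying the Python-side tensor would invoke only this no-op deleter, leaving the CUDA buffer untouched, but I'd like confirmation that this is the right approach.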