Is it possible to avoid releasing memory after a tensor is destroyed?

Hi,

I want to integrate the PyTorch pybind11 bindings into my project.

My project allocates a fixed-size CUDA buffer to hold the network output, and I pass this buffer to a torch::Tensor via from_blob.

After that, I can manipulate the torch tensor in Python.

However, whenever the torch tensor is destroyed (no longer referenced) in Python, it releases the memory I want to keep.

Is it possible to avoid releasing the memory when the torch tensor is destroyed?

Hi,

When the Tensor is created with from_blob, it does not actually release the memory when it is destroyed (it does not know how it was allocated).
It is your responsibility to make sure the blob stays valid at least as long as the Tensor uses it.
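For illustration, here is a minimal C++ sketch of this lifetime contract (buffer size, shape, and error handling are made up for the example):

```cpp
#include <torch/torch.h>
#include <cuda_runtime.h>

int main() {
  // Memory allocated and owned by us, not by PyTorch.
  float* buf = nullptr;
  const int64_t n = 1024;
  cudaMalloc(&buf, n * sizeof(float));

  {
    auto opts = torch::TensorOptions()
                    .dtype(torch::kFloat32)
                    .device(torch::kCUDA);
    // No deleter supplied: the tensor only borrows buf.
    torch::Tensor t = torch::from_blob(buf, {n}, opts);
    t.fill_(1.0f);  // manipulate the buffer through the tensor
  }  // t is destroyed here; buf is NOT freed

  cudaFree(buf);  // freeing the buffer stays our responsibility
  return 0;
}
```

(There is also an overload of from_blob that takes a custom deleter, for the opposite case where you do want the tensor to clean up.)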


That sounds good.

I just found another great solution.

We can define an object exposing __cuda_array_interface__ and pass it to torch.as_tensor, which makes sure both objects share the same memory location.
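A minimal sketch of such an object, assuming you already have a raw device pointer `ptr` to 1024 float32 values (how you obtain it is outside this snippet):

```python
import torch

class CudaBlob:
    """Wraps an externally owned device pointer for zero-copy sharing."""
    def __init__(self, ptr, shape):
        self.__cuda_array_interface__ = {
            "shape": shape,        # e.g. (1024,)
            "typestr": "<f4",      # little-endian float32
            "data": (ptr, False),  # (device pointer, read_only flag)
            "version": 2,
        }

# torch.as_tensor recognizes the interface and aliases the same
# device memory instead of copying:
#   t = torch.as_tensor(CudaBlob(ptr, (1024,)), device="cuda")
# Deleting t does not free the underlying buffer; its owner must
# keep it alive for as long as the tensor is in use.
```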
