Loading data into a torch::tensor from a GPU address


I’ve got a function that returns an address/pointer to the data currently buffered in GPU memory. Is there an easy way to load/move that data into a torch::tensor object without duplicating it on the GPU?

I know I can use torch::from_blob() to create a new tensor and just read through the bytes; however, is there a more convenient/efficient way of doing this?

Best Regards


Any news on this?

I believe I am trying to do the same: given a pointer to GPU memory, I want to build a tensor (without copying anything) and run inference with a model. My steps are:

  1. Get the pointer.
  2. new_tensor = torch::from_blob(pointer, sizes, options)
  3. model.inference(new_tensor)

Is this efficient? Does step 2 copy the data?