Real-time inference - Is it possible to pass a GPU pointer to a PyTorch model?

Context: Real-time acquisition of images, processed on a GPU. A PyTorch model performs segmentation on the latest image. I want to avoid the device-to-CPU copy (to save the processed image) and the CPU-to-device copy needed to run segmentation with the PyTorch model.

Question: Is there any way to pass a GPU pointer to a PyTorch model? I’d like to avoid creating a separate process that watches a folder for new images and processes them; that approach is neither elegant nor fast enough.

Alternate question: Is there any way to create a video stream from the processed images?

You should be able to use torch::from_blob, as long as the underlying GPU array is kept alive (otherwise clone() the tensor if the buffer goes out of scope).
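
A minimal sketch of that idea with the libtorch C++ API, assuming `d_image` is a float pointer that already lives in CUDA memory and holds one H×W image, and `model` is a TorchScript module you have loaded; the names `d_image`, `height`, `width`, and `segment` are just placeholders for this example:

```cpp
#include <torch/script.h>
#include <torch/torch.h>

torch::Tensor segment(float* d_image, int64_t height, int64_t width,
                      torch::jit::script::Module& model) {
  // Wrap the existing device memory without copying it. The options tell
  // from_blob that the pointer is CUDA memory of dtype float32.
  auto options = torch::TensorOptions()
                     .dtype(torch::kFloat32)
                     .device(torch::kCUDA, 0);
  torch::Tensor input =
      torch::from_blob(d_image, {1, 1, height, width}, options);

  // The tensor only borrows d_image; call input = input.clone() if the
  // acquisition pipeline may free or overwrite the buffer before
  // inference finishes.

  torch::NoGradGuard no_grad;  // inference only, no autograd bookkeeping
  return model.forward({input}).toTensor();
}
```

The key point is that from_blob does not take ownership of the memory, so the lifetime of `d_image` has to cover the whole forward pass unless you clone().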

Thanks a lot for your answer, I’ll definitely look into that!

Deleted since I started a new topic.