Keep pytorch tensor in GPU memory after program exit

I would like to keep a PyTorch tensor in GPU memory after the program that created it exits.

The use case: one process uses Torch to move data to the GPU, and then another process should read those memory regions back into a torch.Tensor. Ideally I would just pass a list of device pointers and memory sizes/dtypes between the two processes.

This sounds like a security concern, and passing raw device pointers won’t work: each process has its own virtual address space, so a pointer is only meaningful inside the process that allocated it. (This is also why accessing an address outside your valid address space raises a segmentation fault.) Note as well that the CUDA driver frees a process’s allocations when that process exits, so the memory cannot outlive the producing process.
You could try to use CUDA IPC instead (cudaIpcGetMemHandle / cudaIpcOpenMemHandle), or any other IPC approach which would fit your use case, as long as the exporting process stays alive while the other process reads the memory.
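For reference, a minimal sketch of the CUDA IPC route using torch.multiprocessing, which wraps the CUDA IPC handles for you: CUDA tensors put on a torch.multiprocessing.Queue are sent as IPC handles rather than copies, so both processes see the same device memory. The process layout, queue, and event here are just one way to wire it up — the key point is that the producer must stay alive until the consumer is done.

```python
import torch
import torch.multiprocessing as mp

def consumer(queue, done):
    # The tensor arrives via a CUDA IPC handle, not a copy: this write
    # mutates the same device memory the producer allocated.
    t = queue.get()
    t.fill_(42.0)
    done.set()

def main():
    # CUDA requires the "spawn" start method; "fork" is unsafe once
    # the CUDA context is initialized.
    mp.set_start_method("spawn", force=True)
    queue = mp.Queue()
    done = mp.Event()

    t = torch.zeros(4, device="cuda")
    p = mp.Process(target=consumer, args=(queue, done))
    p.start()
    queue.put(t)       # sends an IPC handle to the existing allocation
    done.wait()        # keep the producer (and the allocation) alive
    torch.cuda.synchronize()
    print(t.cpu().tolist())
    p.join()

if __name__ == "__main__":
    if torch.cuda.is_available():
        main()
    else:
        print("CUDA not available; IPC demo skipped")
```

Once the producer process exits, the allocation is released and any open IPC handles to it become invalid, so this shares memory between live processes but does not persist it past program exit.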