GPU shared memory between linked processes

Hi! I want to ask if you have plans to support shared GPU tensor memory between linked processes (when a consumer sends a tensor to the next consumer) without copying the tensor?

Could you explain your use case a bit? Are you planning to share data between different processes, or within a single application?

Yes. Each process runs a specific model on the same batch of images, and sometimes the processes form a sequence, so one process sends shared tensors to the next. At that point everything crashes, and we have to clone the batch instead.
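
For what it's worth, the zero-copy pattern being asked for can be sketched with a CPU analog using only the Python standard library: the producer allocates a named shared block, and only the block's *name* crosses the process boundary, never the payload. This is just an illustration of the mechanism, not this project's API; the names `_consumer` and `roundtrip` are hypothetical, and for actual GPU memory the same idea would instead require CUDA IPC handles (e.g. `cudaIpcGetMemHandle`, or `torch.multiprocessing`, which uses CUDA IPC under the hood when CUDA tensors are sent between processes).

```python
from multiprocessing import Process, shared_memory

def _consumer(name: str) -> None:
    # Hypothetical downstream stage: attach to the existing block by name.
    # No bytes are copied; both processes see the same physical memory.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[0] += 1  # mutate in place; the producer sees the change
    shm.close()

def roundtrip() -> int:
    # Producer allocates one shared block standing in for the image batch.
    shm = shared_memory.SharedMemory(create=True, size=4)
    shm.buf[0] = 41
    # Only the block's name is sent to the next process, not the data.
    p = Process(target=_consumer, args=(shm.name,))
    p.start()
    p.join()
    value = shm.buf[0]
    shm.close()
    shm.unlink()  # creator is responsible for freeing the block
    return value

if __name__ == "__main__":
    print(roundtrip())  # the consumer's in-place edit is visible: 42
```

The crash described above typically happens because a raw device pointer is only valid inside the CUDA context of the process that allocated it; an IPC handle (or an explicit clone, as the workaround) is what makes the allocation visible to the peer process.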