Is there a way to send the location of a PyTorch tensor in GPU memory between Docker containers and rebuild the tensor in a different container?

I found a similar topic discussed about 4 years ago. There, the poster extracts the sharing metadata from the tensor's storage using the _share_cuda_() function, which yields a cudaIpcMemHandle_t.
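
For reference, here is roughly what I understand the producer side to look like. Note that _share_cuda_() is an internal API and its return value seems to have changed across PyTorch versions; this is only a sketch assuming a recent version, where it returns an 8-tuple including refcounting and event info:

```python
import pickle
import torch

# Producer container: allocate a CUDA tensor to share.
t = torch.arange(16, dtype=torch.float32, device="cuda")

# Internal API: in recent PyTorch versions this returns
# (device, handle, storage_size_bytes, storage_offset_bytes,
#  ref_counter_handle, ref_counter_offset,
#  event_handle, event_sync_required),
# where `handle` wraps the cudaIpcMemHandle_t as bytes.
storage_meta = t._typed_storage()._share_cuda_()

# Everything here is plain picklable data, so it can be sent to the
# other container over any channel (socket, shared volume, queue, ...).
payload = pickle.dumps(
    (type(t), t.size(), t.stride(), t.storage_offset(),
     t.dtype, t.requires_grad, storage_meta)
)
```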

Is there a way to reconstruct a Storage/Tensor from the cudaIpcMemHandle_t (or the other information returned by _share_cuda_()) using PyTorch's own functions? Or is there a better way to achieve the same result?
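
For concreteness, this is the kind of reconstruction I'm hoping exists. The closest thing I can see is torch.multiprocessing.reductions.rebuild_cuda_tensor, which is what torch.multiprocessing itself uses when a CUDA tensor is sent between processes, but it is also internal. A sketch, assuming the 8-tuple layout above and that both containers see the same GPU and share an IPC namespace (e.g. docker run --ipc=host or --ipc=container:<name>):

```python
import pickle
import torch
from torch.multiprocessing.reductions import rebuild_cuda_tensor

# Consumer container: `payload` was received from the producer container.
(tensor_cls, size, stride, tensor_offset,
 dtype, requires_grad, storage_meta) = pickle.loads(payload)

(storage_device, storage_handle,
 storage_size_bytes, storage_offset_bytes,
 ref_counter_handle, ref_counter_offset,
 event_handle, event_sync_required) = storage_meta

# Opens the producer's allocation in this process via CUDA IPC and
# wraps it in a tensor; both tensors then alias the same GPU memory.
t = rebuild_cuda_tensor(
    tensor_cls, size, stride, tensor_offset,
    torch.UntypedStorage,  # torch.multiprocessing passes the storage class here
    dtype,
    storage_device, storage_handle,
    storage_size_bytes, storage_offset_bytes,
    requires_grad,
    ref_counter_handle, ref_counter_offset,
    event_handle, event_sync_required,
)
```

If this works the way I expect, the consumer tensor aliases the producer's memory rather than copying it, so I assume the producer has to keep its tensor alive for as long as the consumer uses it.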