I know that in PyTorch 1.5, clone() can preserve memory format, and therefore we can send non-contiguous tensors between devices.
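If I understand the 1.5 change correctly, clone() now defaults to memory_format=torch.preserve_format, so the strides of a non-contiguous (but dense) tensor survive the copy. A quick check of my understanding:

```python
import torch

x = torch.randn(10, 2).transpose(0, 1)  # non-contiguous view, strides (1, 2)
y = x.clone()                           # defaults to memory_format=torch.preserve_format
print(x.is_contiguous(), y.is_contiguous())  # False False
print(x.stride(), y.stride())                # (1, 2) (1, 2)
```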
I wonder what the case is for copy_(): can we send non-contiguous tensors with it?
If not, is there a suggested workaround that avoids an extra copy?
```python
import torch

a = torch.randn(2, 10, device="cuda:1").share_memory_()  # pre-allocated target
b = torch.randn(10, 2, device="cuda:0")
b = torch.transpose(b, 0, 1)  # b is now non-contiguous, shape (2, 10)
a.copy_(b)  # is it OK?
```
In the example above, we want to avoid clone() so that we don't create a new tensor and then have to move it to shared memory.
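In case it helps, this is the sanity check I would run (it assumes two visible GPUs; the assertion only verifies that the values arrive intact, not what copy_ does internally):

```python
import torch

a = torch.randn(2, 10, device="cuda:1").share_memory_()
b = torch.randn(10, 2, device="cuda:0").transpose(0, 1)  # non-contiguous source

a.copy_(b)  # cross-device copy from a non-contiguous source

# Verify the values landed correctly, regardless of layout.
assert torch.equal(a.cpu(), b.cpu())
print(a.is_contiguous(), b.is_contiguous())
```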