In my code, I call patches_list.contiguous(), but the output shows that patches_list is still not contiguous. In particular, patches_list is a very large tensor and my GPU memory is only 4 GB. I guessed the reason might be that the method can't find a contiguous block of GPU memory to store it, but in fact I never received any error, which confuses me.
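A minimal sketch of the pattern (the shapes here are invented; only the call structure matters):

import torch

patches_list = torch.randn(64, 128, 32).permute(1, 0, 2)  # a non-contiguous view
patches_list.contiguous()                                 # result is not assigned
print(patches_list.is_contiguous())                       # still prints False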
.contiguous() is not an in-place operation (it does not have a trailing _ in its name), which means you should write patches_list = patches_list.contiguous(). Also, .contiguous() will raise an error if it fails!
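For instance (a small sketch; t stands in for any non-contiguous tensor):

import torch

t = torch.randn(3, 4).t()  # transposing creates a non-contiguous view
t.contiguous()             # returns a contiguous copy, which is thrown away here
print(t.is_contiguous())   # False
t = t.contiguous()         # keep the returned tensor
print(t.is_contiguous())   # True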
Oh! Sorry, my fault. Thank you for your quick reply.
My previous problem has been solved, but I noticed that in other code, .contiguous() appears to change the tensor in place. Does this only happen in some cases?
That should not happen. Could you give me a small script to reproduce this, please? I can't reproduce it just by creating new tensors.
I thought transposing/permuting a Tensor always results in a non-contiguous Tensor, but after running

import torch

a = torch.randn(2, 3, 4, 5, 6)
a = a.view(2, 12, 5, 6)
b = a.permute(1, 0, 2, 3)

a.is_contiguous() reports a as a contiguous Tensor. Could you explain where my misunderstanding is?
In your example you are permuting a and assigning the result to b: a is unchanged and remains contiguous, while b gets the permuted tensor and so it is not contiguous.
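To make this concrete (continuing the script above):

print(a.is_contiguous())  # True: a itself was never permuted
print(b.is_contiguous())  # False: b holds the permuted, non-contiguous view
b = b.contiguous()        # as before, assign the result to get a contiguous b
print(b.is_contiguous())  # True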