Will Tensor.contiguous() throw an error if it fails?

In my code, I write:

patches_list.contiguous()
print(patches_list.is_contiguous())

But the output is:
“False”

In particular, patches_list is a very large tensor and my GPU has only 4 GB of memory. I guess the method can't find a contiguous block of GPU memory to store it, but I never received any error, which confuses me.

Hi,
.contiguous() is not an in-place operation (it does not have a trailing _ in its name), which means that you should write patches_list = patches_list.contiguous().
.contiguous() will raise an error if it fails!
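To make the out-of-place behavior concrete, here is a minimal sketch (the tensor and its shape are made up for illustration):

```python
import torch

x = torch.randn(3, 4).t()  # transposing produces a non-contiguous view
print(x.is_contiguous())   # False

x.contiguous()             # returns a NEW contiguous tensor; x itself is unchanged
print(x.is_contiguous())   # still False, the result was discarded

x = x.contiguous()         # reassign to keep the contiguous copy
print(x.is_contiguous())   # True
```

The same pattern applies to patches_list: without the reassignment, the contiguous copy returned by the call is simply thrown away.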

Oh! Sorry, my fault. Thank you for your quick reply.

My previous problem has been solved, but I notice that in other code .contiguous() does seem to change the tensor in-place:


Does this happen only occasionally?

Hi,

That should not happen, could you give me a small script to reproduce this please? I can't reproduce it just by creating new tensors.

I thought transposing/permuting a Tensor always results in a non-contiguous Tensor.

a = torch.randn(2, 3, 4, 5, 6)
a.is_contiguous()
>> True

a = a.view(2, 12, 5, 6)
a.is_contiguous()
>> True

b = a.permute(1, 0, 2, 3)
a.is_contiguous()
>> True
b.is_contiguous()
>> False

Calling a.permute keeps a as a contiguous Tensor. Could you explain where my misunderstanding is?

In your example you are permuting a and assigning the result to b, so a is unchanged and remains contiguous, while b receives the permuted view, which is not contiguous.
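A small sketch of this point: the permuted tensor is a view over the same storage with different strides, so a itself is never modified (shapes are arbitrary here):

```python
import torch

a = torch.randn(2, 12, 5, 6)
b = a.permute(1, 0, 2, 3)

print(a.is_contiguous())             # True: a is untouched by the permute
print(b.is_contiguous())             # False: b is a strided view
print(b.data_ptr() == a.data_ptr())  # True: both share the same underlying storage
```

Only calling b.contiguous() (and keeping its result) would materialize a new, contiguous copy of the permuted data.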