Tensor stride not updated by .contiguous()

I recently got the error message ‘*** RuntimeError: cuDNN requires contiguous weight tensor’ when performing a convolution with a filterbank that had previously been transposed and made contiguous. This only happens when one of the axes has size 1.

To pin the error down I wrote the following example:

import torch

def debug_contiguous(x):
    xt = x.t()  # transpose is a view: sizes and strides are swapped, nothing is copied
    print((tuple(x.size()), x.stride()))                              # size and strides of the original
    print(xt.is_contiguous())
    print((tuple(xt.size()), xt.stride()))                            # size and strides after transposing
    print((tuple(xt.contiguous().size()), xt.contiguous().stride()))  # after .contiguous()
    print((tuple(xt.clone().size()), xt.clone().stride()))            # after .clone()

debug_contiguous(x=torch.rand(2, 3))
debug_contiguous(x=torch.rand(2, 1))

which prints

((2, 3), (3, 1)) # prints size, strides
False
((3, 2), (1, 3))
((3, 2), (2, 1))
((3, 2), (2, 1))

((2, 1), (1, 1))
True
((1, 2), (1, 1))
((1, 2), (1, 1))  <-- stride should be (2,1)
((1, 2), (2, 1))

For the tensor of size (2,3) everything works as expected: xt.is_contiguous() is False, and xt.contiguous() returns a contiguous copy, which is reflected in the new strides (2,1).
If the tensor has size (2,1), the transposed size is (1,2). Both have the same linearized layout in memory, which may be why xt.is_contiguous() returns True. However, the stride after transposition is (1,1). As far as I understand, calling xt.contiguous() should update the stride to (2,1), which is not the case. A workaround is to call xt.clone(), which does result in the correct stride.
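
To make the “same linearized layout” point concrete, here is a small sketch (the helper name is mine) that computes each element's memory offset from the strides; for size (1,2), the strides (1,1) and (2,1) address exactly the same storage locations:

def offsets(size, stride):
    # memory offset of element (i, j) is i*stride[0] + j*stride[1]
    return [i * stride[0] + j * stride[1]
            for i in range(size[0])
            for j in range(size[1])]

print(offsets((1, 2), (1, 1)))  # [0, 1]
print(offsets((1, 2), (2, 1)))  # [0, 1] -- identical layout
print(offsets((3, 2), (1, 3)))  # [0, 3, 1, 4, 2, 5] -- genuinely non-contiguous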

I did not find the code of .contiguous() on GitHub, but I suspect the stride is left unchanged because xt.is_contiguous() returns True, so .contiguous() presumably short-circuits and returns the tensor as-is?

Hi,

Since the first dimension contains a single element, the stride for that dimension is never used to compute an element's offset, so it can be an arbitrary value. As you can see here in the code for isContiguous, dimensions of size 1 are ignored.
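
For illustration, here is a minimal re-implementation of that rule in plain Python (names are mine, not the actual THTensor code): a tensor counts as contiguous if, walking the dimensions from last to first and skipping those of size 1, each stride equals the stride a dense row-major tensor would have.

def is_contiguous(size, stride):
    # Expected stride of a dense (row-major) tensor, built up
    # from the innermost dimension outwards.
    expected = 1
    for n, s in zip(reversed(size), reversed(stride)):
        if n == 1:
            continue  # size-1 dims are skipped: their stride is never used
        if s != expected:
            return False
        expected *= n
    return True

print(is_contiguous((1, 2), (1, 1)))  # True  -- matches xt.is_contiguous() above
print(is_contiguous((3, 2), (1, 3)))  # False -- a genuine transpose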

As for the cuDNN error, it is a false positive that has since been fixed in master.
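
For anyone stuck on an older build, a sketch of the clone() workaround applied to a convolution (the shapes here are invented for illustration, not taken from the original report):

import torch
import torch.nn.functional as F

# A (1, 8) filterbank is transposed, leaving a size-1 axis, then
# reshaped into an (out_channels, in_channels, kH, kW) conv weight.
# clone() recomputes dense strides, which contiguous() does not do here.
w = torch.rand(1, 8).t().clone().view(8, 1, 1, 1)
x = torch.rand(4, 1, 5, 5)
print(F.conv2d(x, w).size())  # torch.Size([4, 8, 5, 5])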