Contiguous vs non-contiguous tensors

It’s a flag indicating whether the tensor’s memory is stored contiguously or not.
Let’s use an example to see how we can get a non-contiguous tensor.

import torch

# Create a tensor of shape [4, 3]
x = torch.arange(12).view(4, 3)
print(x, x.stride())
> tensor([[ 0,  1,  2],
        [ 3,  4,  5],
        [ 6,  7,  8],
        [ 9, 10, 11]]) 
> (3, 1)

As you can see, the tensor has the desired shape.
The strides are also interesting in this case. They basically tell us how many “steps” we have to skip in memory to move to the next position along a certain axis.
If we look at the strides, we see that we have to skip 3 values to get to the next row, but only 1 value to get to the next column. That makes sense so far: the values are stored sequentially in memory, i.e. the memory cells should hold the data as [0, 1, 2, 3, ..., 11].
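
To make the connection between the strides and the memory layout explicit, we can compute the flat memory offset of an element by hand (just a small sketch using the tensor from above):

# element [i, j] lives at offset i * stride(0) + j * stride(1) in the flat memory
i, j = 2, 1
offset = i * x.stride(0) + j * x.stride(1)
print(offset, x[i, j].item())
> 7 7

(The offset and the value coincide here only because x was created with arange, so element k of the storage holds the value k.)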

Now let’s transpose the tensor and have another look at the strides:

y = x.t()
print(y, y.stride())
print(y.is_contiguous())
> tensor([[ 0,  3,  6,  9],
        [ 1,  4,  7, 10],
        [ 2,  5,  8, 11]]) 
> (1, 3)
> False

The print statement of the tensor yields the desired transposed view of x.
However, the strides are now swapped: in order to go to the next row, we only have to skip 1 value, while we have to skip 3 values to move to the next column.
This makes sense if we recall the memory layout of the tensor:
[0, 1, 2, 3, 4, ..., 11]
In order to move to the next column (e.g. from 0 to 3), we have to skip 3 values in memory.
The tensor is thus not contiguous anymore!
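
We can also verify that the transpose didn’t move any data around, x and y still point to the very same memory (just a quick sanity check):

print(x.data_ptr() == y.data_ptr())
> True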

That’s not really a problem for us, except that some operations won’t work.
E.g. if we try to get a flattened view of y, we will run into a RuntimeError:

try:
    y = y.view(-1)
except RuntimeError as e:
    print(e)
> invalid argument 2: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Call .contiguous() before .view().

So let’s call .contiguous() before the view call:

y = y.contiguous()
print(y.stride())
> (4, 1)
y = y.view(-1)

Now the memory layout is contiguous again (have a look at the strides) and the view works just fine.
The contiguous call copies the data to a new, contiguous chunk of memory (if the tensor is already contiguous, it just returns the same tensor).
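
A quick way to check this is to compare the data pointers. Since y was already overwritten above, here is a small sketch redoing the transpose:

z = x.t()
print(z.data_ptr() == x.data_ptr())              # the transpose is just a view into x
> True
print(z.contiguous().data_ptr() == x.data_ptr())  # .contiguous() copies to new memory
> False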

That being said, contiguous arrays are necessary for some vectorized instructions to work. They generally also have some performance advantages, since a contiguous memory access pattern can be used optimally by modern CPUs (caching, prefetching), but I’m really not an expert on this topic, so take this last bit of information with a grain of salt. :wink:
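
If you want to check the performance claim on your own machine, a very rough timing sketch could look like this (the exact numbers will of course depend on your hardware and tensor sizes, so don’t read too much into them):

import timeit

a = torch.randn(4096, 4096)
b = a.t()            # non-contiguous view with swapped strides
c = b.contiguous()   # contiguous copy holding the same values

print(timeit.timeit(lambda: b.sum(dim=1), number=100))  # strided access
print(timeit.timeit(lambda: c.sum(dim=1), number=100))  # contiguous access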
