Call contiguous() after every permute() call?

I am trying to follow the PyTorch code for the SSD implementation (GitHub link).
Inside ssd.py, at this line, there is a call to contiguous() after permute().

            loc.append(l(x).permute(0, 2, 3, 1).contiguous())

Following the discussion here, I understand the use of contiguous(), but I can’t understand why it was called after permute().
Is it necessary or good practice to call it after every call to permute()? Is it to avoid any “surprise” errors later in the model while accessing the tensor?

permute() changes the tensor so that it is no longer contiguous; .contiguous() makes it contiguous again by copying the data.
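
A minimal sketch of this (the shape is arbitrary, chosen to mirror an N, C, H, W feature map):

import torch

x = torch.randn(1, 8, 4, 4)              # e.g. an N, C, H, W feature map
print(x.is_contiguous())                 # True
y = x.permute(0, 2, 3, 1)                # N, H, W, C view of the same data
print(y.is_contiguous())                 # False
print(y.contiguous().is_contiguous())    # True: data copied into a new layout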


I have never had to call .contiguous() after .permute().
As I understand it, permute works by changing the strides of the view mechanism. The data isn’t moved at all, so the underlying storage is still the same contiguous block of memory; note, though, that is_contiguous() will return False after permuting, because the strides no longer match a row-major layout for the new shape.
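
This is easy to verify: after permuting, the strides change but the data pointer does not. A minimal check:

import torch

t = torch.arange(6.).view(2, 3)
p = t.permute(1, 0)
print(t.stride(), p.stride())          # (3, 1) (1, 3): only the strides differ
print(t.data_ptr() == p.data_ptr())    # True: same underlying storage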

The only caveat is if you slice the tensor after permuting the dimensions. Depending on the original data layout, slicing can result in taking a non-contiguous chunk of data.

More precisely, if you take a slice along the first original dimension, then the result will be contiguous; if you take a slice along any other dimension, the result will be non-contiguous.
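
For instance, selecting a single index makes the distinction visible through is_contiguous() (a minimal sketch):

import torch

t = torch.arange(24.).view(2, 3, 4)    # strides (12, 4, 1)
p = t.permute(1, 2, 0)                 # shape (3, 4, 2), strides (4, 1, 12)

# one index along the *original* first dimension selects a contiguous
# block of the underlying storage:
print(p[:, :, 0].is_contiguous())      # True
# an index along any other original dimension selects scattered elements:
print(p[0].is_contiguous())            # False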


If you want to reshape the tensor using .view(), you definitely need to call .contiguous(). The call is necessary even if you don’t slice after permuting. Check the following example:

import torch

test = torch.arange(6.).view(2, 3)   # torch.range is deprecated; use torch.arange
perm = test.permute(1, 0)
# the following line raises a RuntimeError
perm.view(2, 3)
# but this will work
perm.contiguous().view(2, 3)
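
As an aside, .reshape() does this contiguous-then-view dance for you: it returns a view when the strides allow it and silently copies otherwise, so it succeeds where .view() raises:

import torch

perm = torch.arange(6.).view(2, 3).permute(1, 0)
print(perm.reshape(2, 3))   # works: reshape copies because a view is impossible here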

This works without any errors:

import torch

test = torch.arange(24.).view(2, 3, 4)   # torch.range is deprecated; use torch.arange
perm = test.permute(1, 0, 2)
perm.view(3, 2, 2, 2)

but the following raises a RuntimeError:

perm.view(6, 4)
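
The difference comes down to strides: view() never copies data, so it can only split or merge dimensions whose strides still line up. A sketch of what happens in the two cases:

import torch

t = torch.arange(24.).view(2, 3, 4)
perm = t.permute(1, 0, 2)
print(perm.stride())                    # (4, 12, 1)

# view(3, 2, 2, 2) only splits the last dimension (stride 1, size 4)
# into (2, 2); the existing strides still describe the result, so no
# copy is needed:
print(perm.view(3, 2, 2, 2).stride())   # (4, 12, 2, 1)

# view(6, 4) would have to merge dims 0 and 1, but a merged dimension
# of size 6 cannot be expressed with a single stride given strides
# (4, 12) -- hence the RuntimeError and the need for .contiguous().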