A tensor whose values are laid out in storage starting from the rightmost dimension onward (that is, moving along rows for a 2D tensor) is defined as *contiguous*. Contiguous tensors are convenient because we can visit them efficiently, in order, without jumping around in the storage (improving data locality improves performance because of the way memory access works on modern CPUs). This advantage of course depends on the order in which an algorithm visits the elements. Some tensor operations in PyTorch only work on contiguous tensors, such as `view`, […]. In that case, PyTorch will throw an informative exception and require us to call `contiguous` explicitly. It's worth noting that calling `contiguous` will do nothing (and will not hurt performance) if the tensor is already contiguous.
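A minimal sketch of this behavior (assuming a recent PyTorch build; the exact wording of the error message varies across versions):

```python
import torch

t = torch.tensor([[1, 2], [3, 4]])
print(t.is_contiguous())    # True: values sit in row-major order in storage

tt = t.t()                  # the transpose shares the same storage; only strides change
print(tt.is_contiguous())   # False

try:
    tt.view(4)              # view requires a contiguous tensor...
except RuntimeError as e:
    print(e)                # ...so PyTorch raises an informative RuntimeError

ttc = tt.contiguous()       # copies the values into a new, row-major storage
print(ttc.view(4))          # tensor([1, 3, 2, 4])

print(t.contiguous() is t)  # True: a no-op on an already-contiguous tensor
```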
Note that PyTorch's meaning of "contiguous" is more specific than the general use of the word in computer science: the values must not only occupy a contiguous block of memory, they must also be ordered (i.e., contiguous *and* ordered).
E.g., given the tensor:

[[1, 2],
 [3, 4]]
| Storage in memory | PyTorch contiguous? | Generally "contiguous" in memory-space? |
|---|---|---|
| `1 2 3 4 0 0 0` | ✓ | ✓ |
| `1 3 2 4 0 0 0` | ✗ | ✓ |
| `1 0 2 0 3 0 4` | ✗ | ✗ |
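The three rows of the table can be reproduced directly in PyTorch. A sketch follows; the trailing zeros in the table just stand for unused storage beyond the tensor's elements, which this sketch omits, and `Tensor.storage()` is deprecated in newer releases in favor of `untyped_storage()` but still shows the underlying buffer here:

```python
import torch

# Row 1: storage 1 2 3 4 (row-major) -> contiguous in both senses
a = torch.tensor([[1, 2], [3, 4]])
print(list(a.storage()), a.is_contiguous())   # [1, 2, 3, 4] True

# Row 2: storage 1 3 2 4 (via a transpose) -> a dense block, but not row-major
b = torch.tensor([[1, 3], [2, 4]]).t()        # b equals [[1, 2], [3, 4]]
print(list(b.storage()), b.is_contiguous())   # [1, 3, 2, 4] False

# Row 3: storage 1 0 2 0 3 0 4 -> the values are not even adjacent in memory
base = torch.tensor([1, 0, 2, 0, 3, 0, 4])
c = base.as_strided((2, 2), (4, 2))           # c equals [[1, 2], [3, 4]]
print(list(c.storage()), c.is_contiguous())   # [1, 0, 2, 0, 3, 0, 4] False
```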