Hi all, I work mostly on computer vision problems, and I've found that CV code usually involves a lot of tensor manipulation (e.g., reshaping, swapping axes, adding new axes), which can result in non-contiguous tensors (there's a very good explanation here). Sometimes people deliberately try to keep tensors contiguous, for example the following line from the popular detectron2's detectron2.data.dataset_mapper.DatasetMapper:
dataset_dict["image"] = torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1)))
makes sure that the image tensor is created with contiguous memory.
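For context, here is a minimal sketch of how this shows up in plain PyTorch (the tensor names are just illustrative): operations like permute keep the same underlying storage but change the strides, so the result is usually non-contiguous until you call .contiguous(), which copies the data into a fresh contiguous block.

import torch

x = torch.randn(3, 224, 224)                  # freshly allocated, contiguous by construction
y = x.permute(1, 2, 0)                        # same storage, new strides -> non-contiguous view
print(x.is_contiguous(), y.is_contiguous())   # True False

y_contig = y.contiguous()                     # copies into a new contiguous block if needed
print(y_contig.is_contiguous())               # True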
But most of the time people don't seem to care much about keeping tensors contiguous, and there's heavy tensor manipulation all over the place (not blaming anyone, just describing how people make full use of the flexibility the framework provides lol).
I wonder if there are any general guidelines for dealing with tensor memory. Is it always better to use contiguous tensors? Thanks!