If an operation expects a contiguous tensor, it will make the memory contiguous explicitly, since otherwise the indexing could be wrong and would silently break the computation (and thus the training).
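Here is a minimal sketch of what non-contiguous memory looks like in practice (the transpose and the printed values are just an illustrative example):

```python
import torch

x = torch.arange(6).reshape(2, 3)
y = x.t()                      # transpose returns a non-contiguous view
print(y.is_contiguous())       # False

# .view requires contiguous memory and raises a RuntimeError here
try:
    y.view(-1)
except RuntimeError as e:
    print(e)

# Making the memory contiguous copies the data into the expected layout,
# so plain row-major indexing is valid again
print(y.contiguous().view(-1))  # tensor([0, 3, 1, 4, 2, 5])
```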
Yes, `reshape` combines a `view` and `contiguous` op. As described, PyTorch will either raise an error if a contiguous tensor is expected in an operation or call `.contiguous` internally.
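A quick sketch of that behavior, using the same transposed tensor as above (the exact values are again just an example):

```python
import torch

x = torch.arange(6).reshape(2, 3).t()   # non-contiguous
print(x.is_contiguous())                  # False

# reshape handles the non-contiguous case by copying when needed,
# equivalent to x.contiguous().view(-1) here
print(x.reshape(-1))                      # tensor([0, 3, 1, 4, 2, 5])

# for an already contiguous input, reshape returns a view without copying
z = torch.arange(6).reshape(2, 3)
print(z.reshape(-1).data_ptr() == z.data_ptr())  # True
```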