In-place copy of tensors sharing the same underlying storage

import torch

x = torch.randn(size=(10, 10))
x[:, 0:8] = x[:, 1:9]  # overlapping copy along dim 1: runs without complaint
x[0:8, :] = x[1:9, :]  # overlapping copy along dim 0: raises RuntimeError

In PyTorch (2.6), the second assignment errors with:

RuntimeError: unsupported operation: some elements of the input tensor and the written-to tensor refer to a single memory location. Please clone() the tensor before performing the operation.
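Following the message's suggestion, cloning the source first does make the row-wise copy run cleanly:

x[0:8, :] = x[1:9, :].clone()  # clone() materializes the source in fresh storage, so there is no aliasing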

But I think the first assignment is equally unsafe and can result in undefined behavior. Why does the overlap check only trigger for the slice along the first dimension, or is there some reason the column-wise copy is actually safe? See the comparison sketch below.
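To make the question concrete, here is a minimal sketch (the seed and the variable names safe/risky are just for the repro) comparing the overlapping column-wise copy against one routed through a cloned temporary:

import torch

torch.manual_seed(0)
x = torch.randn(10, 10)

safe = x.clone()
safe[:, 0:8] = x[:, 1:9].clone()  # copy via a fresh temporary buffer: no aliasing

risky = x.clone()
risky[:, 0:8] = risky[:, 1:9]     # the overlapping in-place copy PyTorch accepts

# If the overlapping copy were unsafe, these two results could differ.
print(torch.equal(risky, safe))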