Why doesn't PyTorch support negative slicing?

e.g. x[::-1] is not allowed.

ValueError: step must be greater than zero
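For reference, a minimal reproduction (any tensor triggers the same error):

```python
import torch

x = torch.arange(5)  # tensor([0, 1, 2, 3, 4])
x[::-1]              # ValueError: step must be greater than zero
```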

Is there some design reason going back to the beginning, or does supporting this bring in technical difficulties? There are also issue trackers that have gone back and forth on this question for years, e.g.:

torch.flip is certainly one workaround, but it makes an extra copy that is unnecessary most of the time. The main question is: what stops us from supporting negative slicing? It is so common (and natural) in other numerical packages (NumPy and many others), and I cannot come up with any valid reason for skipping the support in PyTorch.
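For comparison, a sketch of the workaround next to the NumPy behavior (assuming a simple 1-D tensor):

```python
import torch

x = torch.arange(5)       # tensor([0, 1, 2, 3, 4])

# NumPy reverses with a negative-step slice and returns a *view*
# (implemented with negative strides), so no data is copied:
x.numpy()[::-1]           # array([4, 3, 2, 1, 0])

# PyTorch instead needs torch.flip, which allocates a new tensor (a copy):
torch.flip(x, dims=[0])   # tensor([4, 3, 2, 1, 0])
```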