The output of
`torch.cat((x, x, x), -1)` and `torch.cat((x, x, x), 1)` seems to be the same, but what does it mean to pass a negative dimension? The PyTorch documentation does not say that the `dim` argument needs to be non-negative.
In torch, `dim = -1` means the operation is performed along the last dimension. Since your `x` is 2-D, the last dimension is dimension 1, which is why `torch.cat((x, x, x), -1) == torch.cat((x, x, x), 1)`
(not strictly equal as objects, since the two calls return distinct tensors, but elementwise equal, you get the idea) in your example.
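A quick sketch to illustrate (the shape `(2, 3)` is just an assumed example input):

```python
import torch

x = torch.ones(2, 3)          # a 2-D tensor, so its dims are 0 and 1
a = torch.cat((x, x, x), -1)  # -1 resolves to the last dim, i.e. dim 1 here
b = torch.cat((x, x, x), 1)

print(a.shape)                # the three copies are stacked side by side: (2, 9)
print(torch.equal(a, b))     # True: same values, though a and b are distinct tensors
```

`a == b` would give you an elementwise boolean tensor of all `True`; `torch.equal` collapses that into a single bool.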
Adding more explanation to @dazzle-me's answer. I learned negative dimensions like this:
negative dimensions come in handy when we are too lazy to figure out how many dimensions a tensor has:
-1 is the last dimension, -2 is the second-to-last dimension, and so on.
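For example, on a 3-D tensor (the shape `(2, 3, 4)` below is just an assumed example), the negative indices line up with the positive ones like Python list indexing:

```python
import torch

y = torch.zeros(2, 3, 4)  # dims: 0, 1, 2
# Negative indices count from the end, as with Python sequences:
print(y.size(-1))         # 4, the last dim (same as dim 2)
print(y.size(-2))         # 3, the second-to-last dim (same as dim 1)

# So this works regardless of how many dims y has:
z = torch.cat((y, y), -1)
print(z.shape)            # (2, 3, 8)
```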