The returned LongTensor of indices always has one fewer dimension than the input tensor, because each index identifies a position along the reduced dimension. Without specifying a dimension along which to take the max, torch wouldn't know which indices to return. When the dimension isn't specified, only the single maximal value over the entire tensor is returned (as a zero-dimensional tensor). For example:
b = torch.randn(4, 4)
torch.max(b)
# tensor(1.2384)
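For contrast, here is a small sketch of the dimension-specified form, which returns both the max values and their indices (the LongTensor), each with one fewer dimension than the input:

```python
import torch

a = torch.randn(4, 4)

# Reducing over dim=1: both results have shape (4,), one fewer
# dimension than the (4, 4) input.
values, indices = torch.max(a, dim=1)
print(values.shape)   # torch.Size([4])
print(indices.dtype)  # torch.int64 (a LongTensor of positions along dim 1)
```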
The reason is historical: this is the behavior inherited from Lua Torch. We've been talking about changing these functions so that they behave consistently whether or not the dimension is specified, but it's difficult to migrate without breaking backwards compatibility.