Torch.max() unexpected behavior

I am using torch.max on a torch tensor along a specific dimension. When I try to use the result, I get myriad errors. I am able to display the contents of the result (see code snippet), but not able to use it as a tensor: mathematical operations and size() both fail (errors below). However, when I do the same operation on the entire tensor to find the max scalar value, I am able to use it. The same holds for torch.min().

Would appreciate any insight into this behavior: how do I get a tensor which holds the max/min value of another tensor along a given dimension? I don’t want to convert the tensor to numpy and use np.max, since I plan to use this operation inside a loss function and therefore need gradients to be tracked through it.

>>> import torch
>>> a = torch.rand((20,3,128,128,128))
>>> maxval = torch.max(a, 1)
>>> maxval[:5] # [0.8795, 0.9569, 0.5381,  ..., 0.5498, 0.8159, 0.8428], 
         # [0.8837, 0.8009, 0.6006,  ..., 0.3414, 0.9229, 0.6836],
         # [0.9956, 0.5561, 0.8130,  ..., 0.7098, 0.7955, 0.6614]]],
>>> maxval.size()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'torch.return_types.max' object has no attribute 'size'
>>> a_scaled = a - maxval
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: sub(): argument 'other' (position 1) must be Tensor, not torch.return_types.max
>>> maxscalar = torch.max(a)
>>> maxscalar.size()
torch.Size([])

When called with a dim argument, torch.max returns a (values, indices) pair (a torch.return_types.max named tuple in recent versions), not a tensor. Hence, you are unable to call size() on it. Try calling maxval[0].size().
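A minimal sketch of both ways to pull the values tensor out of the return value (using a smaller tensor than in the original snippet, purely for brevity):

```python
import torch

a = torch.rand(20, 3, 4, 4, 4)  # smaller than the 128**3 tensor above, same idea

# torch.max along a dim returns a (values, indices) pair
vals, idx = torch.max(a, 1)     # tuple unpacking
vals2 = torch.max(a, 1).values  # attribute access on the named tuple (recent versions)
print(vals.size())              # torch.Size([20, 4, 4, 4])

# to subtract the per-dim max back out, restore the reduced dim so broadcasting works
a_scaled = a - vals.unsqueeze(1)
```

After the subtraction, every entry of `a_scaled` is at most 0, with exact zeros at the argmax positions.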

The error messages seem to be different in your case. Which PyTorch version are you using? I get the error below in version 1.0.1.post2.

>>> maxval.size()                                                                                                                                
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'tuple' object has no attribute 'size'

'tuple' object has no attribute 'size'

Thanks Arul - that makes a lot of sense.
This is my output now:

>>> maxval = torch.max(a,0)
>>> maxval[0].size()
torch.Size([3, 128, 128, 128])
>>> maxval[0][0,0,0,:5]
tensor([0.9971, 0.9611, 0.9765, 0.9123, 0.9630])
>>> maxval[1][0,0,0,:5]
tensor([14,  0,  2, 18, 17])

So the second item in the tuple is the list of indices, and it has the same size as the first item (the max values) in the maxval tuple.

My torch version is ‘1.1.0a0+7c2290e’