torch.max(input, other, out=None) doesn't work on GPU

I am trying to run torch.max(t1, t2) to find the element-wise max between two Variables. It works on the CPU, but fails when I run it on the GPU.

It gives me this error:

TypeError: gt received an invalid combination of arguments - got (torch.cuda.FloatTensor), but expected one of:
 * (float value)
      didn't match because some of the arguments have invalid types: (torch.cuda.FloatTensor)
 * (torch.FloatTensor other)
      didn't match because some of the arguments have invalid types: (torch.cuda.FloatTensor)

I have already called .cuda() on the module in which I wrote this code.
Do I need to make any other changes to call it on the GPU, or is it currently not implemented for the GPU?
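For reference, a minimal sketch of the situation I seem to be hitting (t1/t2 are placeholder names here, and the exact error text depends on the PyTorch version; newer releases raise a RuntimeError about mismatched devices instead of this TypeError):

```python
import torch

# Hypothetical reproduction: one operand on the CPU, one on the GPU.
t1 = torch.randn(3, 4)          # CPU tensor
t2 = torch.randn(3, 4).cuda()   # CUDA tensor

print(t1.is_cuda, t2.is_cuda)   # False True

# Mixing devices in an element-wise op raises an error.
torch.max(t1, t2)
```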


From the error message, it sounds like t1 is a CPU tensor while t2 is a CUDA tensor (or maybe the other way around). Could you check with t1.is_cuda, t2.is_cuda?

For this to work, both t1 and t2 should be on the GPU.
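A minimal sketch of the fix, assuming both operands can simply be moved to the same device (shapes and names are made up; with the older API you would call .cuda() on both tensors instead of .to(device)):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Put both operands on the same device before the element-wise max.
t1 = torch.randn(3, 4).to(device)
t2 = torch.randn(3, 4).to(device)

print(t1.is_cuda, t2.is_cuda)    # both True when a GPU is available

result = torch.max(t1, t2)       # element-wise max, computed on that device
print(result.device)
```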