I am trying to run torch.max(t1, t2) to find the element-wise max between two Variables. It works on the CPU, but doesn’t work when I run it on the GPU.
It gives me the following error:
```
TypeError: gt received an invalid combination of arguments - got (torch.cuda.FloatTensor), but expected one of:
 * (float value)
      didn't match because some of the arguments have invalid types: (torch.cuda.FloatTensor)
 * (torch.FloatTensor other)
      didn't match because some of the arguments have invalid types: (torch.cuda.FloatTensor)
```
I have already called .cuda() on the module in which I wrote this code.
Do I need to make any other changes to call it on the GPU? Or is it currently not implemented for the GPU?
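For reference, here is a minimal sketch of roughly what I'm running (the tensor names and shapes are just placeholders, not my actual data):

```python
import torch
from torch.autograd import Variable

# Placeholder Variables; my real tensors have different shapes and values.
t1 = Variable(torch.randn(3, 4))
t2 = Variable(torch.randn(3, 4))

# On the CPU this returns the element-wise maximum, as expected.
out_cpu = torch.max(t1, t2)

# After moving to the GPU, the same call is where I hit the TypeError above.
t1 = t1.cuda()
t2 = t2.cuda()
out_gpu = torch.max(t1, t2)
```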