If I call torch.max(a) in the Python shell, it returns a <type 'float'>. But if I try the same thing in my script, it returns a [torch.cuda.FloatTensor of size 1 (GPU 0)]. The same happens with torch.sum().
Shell:
>>> a = torch.FloatTensor([3])
>>> a
3
[torch.FloatTensor of size 1]
>>> type(torch.max(a))
<type 'float'>
Script:
print 'losses: ' + str(torch.max(loss))
outputs:
losses: [torch.cuda.FloatTensor of size 1 (GPU 0)]
Is this a bug or am I doing something wrong?