Odd behavior of max and sum outside the Python shell

If I call torch.max(a) in the Python interactive shell, it returns <type 'float'>. But if I try the same thing in my script, it returns [torch.cuda.FloatTensor of size 1 (GPU 0)]. Same issue with torch.sum().


>>> a = torch.FloatTensor([3])
>>> a

[torch.FloatTensor of size 1]

>>> type(torch.max(a))
<type 'float'>


print 'losses: '+str(torch.max(loss))


[torch.cuda.FloatTensor of size 1 (GPU 0)]

Is this a bug or am I doing something wrong?

Not a bug.
This happens because CUDA is enabled in your script but not in your Python interactive shell. CUDA is disabled by default unless you enable it explicitly (for example by calling .cuda() on a tensor or model), so your script must be moving the tensor to the GPU somewhere. As your own transcript shows, reductions like torch.max() and torch.sum() return a plain Python float for a CPU tensor, but a size-1 tensor for a CUDA tensor.
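To get consistent output on both devices, you can convert the reduction result explicitly with float(), which yields a plain Python number whether the tensor lives on the CPU or the GPU. A minimal sketch (the `loss` tensor here is a stand-in for the one in your script, and the .cuda() call mimics whatever moved it to the GPU):

```python
import torch

# Stand-in for the `loss` tensor from the original script.
loss = torch.FloatTensor([3])
if torch.cuda.is_available():
    # On a CUDA tensor, torch.max() returns a size-1 tensor instead of a float.
    loss = loss.cuda()

# float() unwraps a size-1 result to a plain Python number on either device,
# so the printed string looks the same in the shell and in the script.
max_loss = float(torch.max(loss))
print('losses: ' + str(max_loss))  # -> losses: 3.0
```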