Odd behavior of max and sum outside the Python shell

If I do torch.max(a) inside the Python shell, it returns <type 'float'>. But if I try the same thing in my script, it returns [torch.cuda.FloatTensor of size 1 (GPU 0)]. The same happens with torch.sum().

Shell:

>>> a = torch.FloatTensor([3])
>>> a

 3
[torch.FloatTensor of size 1]

>>> type(torch.max(a))
<type 'float'>

Script:

print 'losses: '+str(torch.max(loss))

outputs:

[torch.cuda.FloatTensor of size 1 (GPU 0)]

Is this a bug or am I doing something wrong?

Not a bug.
In the interactive shell you created a plain CPU FloatTensor, but in your script the loss is a torch.cuda.FloatTensor, and reductions such as torch.max and torch.sum on a CUDA tensor return a 1-element CUDA tensor rather than a Python float.
CUDA is never used unless you enable it explicitly (for example by calling .cuda() on your tensors or model), so you must be enabling it somewhere in your script.
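For reference, here is a minimal sketch of the difference, assuming an older (pre-0.4) PyTorch build like the one in the question and a CUDA-capable machine for the GPU branch:

    import torch

    # CPU tensor: in older PyTorch releases, reductions such as
    # torch.max / torch.sum return a plain Python float.
    a = torch.FloatTensor([3])
    print(type(torch.max(a)))  # <type 'float'>, as in the shell transcript above

    # GPU tensor: the same reduction returns a 1-element torch.cuda.FloatTensor,
    # which is what the script prints.
    if torch.cuda.is_available():
        b = a.cuda()
        m = torch.max(b)
        print(type(m))  # torch.cuda.FloatTensor of size 1

        # To print a plain number, convert the 1-element result explicitly:
        print('losses: ' + str(float(m)))

Converting with float() (or indexing the 1-element tensor) is how you get back the same kind of value the CPU path gives you.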