Why can the built-in sum function deal with Tensors?

Hi all,
I found that the built-in Python sum function can deal with the Tensor data type. That's weird!
Moreover, the result of sum(a, b) is distinct from sum([a, b]). Could anyone give some insight?

In [8]: a
tensor([[1., 1., 1.],
        [1., 1., 1.]], device='cuda:0')

In [9]: b
tensor([[1., 1., 1.],
        [1., 1., 1.]], device='cuda:0')

In [10]: sum(a,b)
tensor([[3., 3., 3.],
        [3., 3., 3.]], device='cuda:0')

In [11]: sum([a,b])
tensor([[2., 2., 2.],
        [2., 2., 2.]], device='cuda:0')


PyTorch Tensors are iterable, so Python iterates over the Tensor and sums its elements. Python takes each slice along the left-most dimension of the Tensor, from left to right, and calls __add__ on each pair. It's going to be slower than Tensor.sum(), because it's equivalent to:

import torch

x = torch.randn(10, 20)
result = sum(x)  # Python's built-in sum, not torch.sum
# equivalent to
result = (((((((((x[0] + x[1]) + x[2]) + x[3]) + x[4]) + x[5]) + x[6]) + x[7]) + x[8]) + x[9])
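The fold that built-in sum performs can be sketched in plain Python (a minimal stand-in, runnable without torch; the function name here is made up for illustration):

```python
def builtin_sum_equivalent(iterable, start=0):
    """Mimic Python's built-in sum: a left-to-right fold with +."""
    result = start
    for item in iterable:
        result = result + item  # calls __add__ / __radd__ at each step
    return result

# Behaves like the built-in for ordinary numbers:
print(builtin_sum_equivalent([3, 4, 5]))  # 12
print(builtin_sum_equivalent([3, 4, 5]) == sum([3, 4, 5]))  # True
```

Because the fold only relies on +, it works on anything whose elements define __add__, which is exactly why it also works on Tensors.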

The API for sum is sum(iterable[, start]), so when you do sum(a, b), it actually uses b as the starting value. It then iterates over a and adds each row of a to b, so the result is b + a[0] + a[1]. Each a[i] is the row tensor([1., 1., 1.]), which broadcasts against b, so every entry comes out as 1 + 1 + 1 = 3.
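The start-value behaviour is easy to check with plain numbers (no torch needed); the same fold explains the tensor output above:

```python
# sum(iterable, start): the fold begins from `start`, not from 0.
assert sum([1, 2, 3], 10) == 10 + 1 + 2 + 3 == 16

# With the tensors above, sum(a, b) computes b + a[0] + a[1],
# where each a[i] is the row [1., 1., 1.] broadcast against b,
# so every entry ends up as 1 + 1 + 1 = 3.
```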

When you do sum([a, b]), it's more in line with what you actually tried to do: it sums the iterable [a, b] from left to right, starting from the default start value 0, so sum([a, b]) is equivalent to 0 + a + b, i.e. a + b.
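By contrast, sum([a, b]) folds over the two-element list with the default start value 0, which is again checkable with plain numbers standing in for single tensor entries:

```python
# sum([a, b]) computes 0 + a + b; with the all-ones tensors above,
# each entry is 0 + 1 + 1 = 2, matching the second output.
a_entry, b_entry = 1, 1  # one entry of each all-ones tensor
assert sum([a_entry, b_entry]) == 0 + a_entry + b_entry == 2
```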
