Get the mean from a list of tensors

Hi!
I have a list of tensors

my_list = [tensor(0.8223, device='cuda:0'), tensor(1.8351, device='cuda:0'), tensor(1.4888, device='cuda:0'),]

and I want their mean. I tried torch.mean(my_list), but got this error:

TypeError: mean(): argument 'input' (position 1) must be Tensor, not list

Then I tried np.mean(my_list) and got this error:

in _mean ret = ret.dtype.type(ret / rcount)
AttributeError: 'torch.dtype' object has no attribute 'type'

The workaround below works for me. Is there a better way of doing this that doesn't require calling detach() first?

new_list = [x.cpu().detach().numpy() for x in my_list]
my_mean = np.mean(new_list)

Also, what should I do if I want the returned value to be a tensor on the GPU, i.e. tensor(my_mean, device='cuda:0')?
I tried torch.cuda.FloatTensor(my_mean) but got this error:

TypeError: new(): data must be a sequence (got numpy.float32)

Thank you.


Hi,
If each tensor in the list contains a single value, you can call .item() on it to get that value as a Python number and then take the mean of the resulting list (e.g. with sum(...) / len(...)).
If you want to keep working with tensors (to retain gradients, for example), you can use torch.cat(your_list, 0) to concatenate the list into a single tensor and then call .mean() on the result.
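
A minimal sketch of both suggestions (assuming the 0-dim tensors from the question; the device argument is left out so the snippet also runs without a GPU):

import torch

my_list = [torch.tensor(0.8223), torch.tensor(1.8351), torch.tensor(1.4888)]

# Option 1: extract plain Python numbers (gradients are not kept)
values = [t.item() for t in my_list]
py_mean = sum(values) / len(values)

# Option 2: stay with tensors (keeps gradients and the original device)
# torch.cat needs at least 1-dim inputs, so unsqueeze the 0-dim tensors first
tensor_mean = torch.cat([t.unsqueeze(0) for t in my_list], 0).mean()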

mean = torch.mean(torch.stack(my_list))
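
As a usage note (a small sketch, assuming the tensors in my_list live on cuda:0): torch.stack keeps the device and dtype of its inputs, so the resulting mean is already a CUDA tensor and no round trip through numpy is needed.

device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
my_list = [torch.tensor(v, device=device) for v in (0.8223, 1.8351, 1.4888)]
mean = torch.mean(torch.stack(my_list))
print(mean)  # tensor(1.3821, device='cuda:0') when a GPU is available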

Thanks. I tried torch.cat(my_list, 0), but got this error:

RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
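
For reference, a quick sketch of the difference (not from the original posts): torch.cat joins tensors along an existing dimension, so it rejects 0-dim tensors, while torch.stack adds a new dimension and accepts them.

a, b = torch.tensor(1.0), torch.tensor(2.0)  # 0-dim (scalar) tensors
# torch.cat([a, b], 0)       # RuntimeError: zero-dimensional tensor cannot be concatenated
torch.stack([a, b])          # tensor([1., 2.])
torch.stack([a, b]).mean()   # tensor(1.5000)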

@smth's solution works.

Thank you all for the quick replies

The accepted solution works for 0-dim tensors, or only when a global mean is required. For the sake of completeness, I would add the following as a generalized solution for obtaining an element-wise mean tensor when the input list contains multi-dimensional tensors of the same shape.

torch.mean(torch.stack(my_list), dim=0)
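
For example (a small sketch with made-up same-shape tensors):

my_list = [torch.tensor([[1., 2.], [3., 4.]]),
           torch.tensor([[5., 6.], [7., 8.]])]

torch.mean(torch.stack(my_list), dim=0)  # stack -> shape (2, 2, 2), mean over dim=0 -> shape (2, 2)
# tensor([[3., 4.],
#         [5., 6.]])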


Hi @ntomita, looking at your great answer: any hint on how to do something similar but with min or max?

Hi @Mauricio_Maroto. I hope you have already found an answer to your question. You can use the torch.max and torch.min functions in a similar way to torch.mean, except that you ignore the indices output.

maxvals, _ = torch.max(torch.stack(my_list), dim=0)
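
As a side note (assuming a reasonably recent PyTorch version), torch.amax and torch.amin return only the values, so the placeholder for the unused indices isn't needed:

maxvals = torch.amax(torch.stack(my_list), dim=0)
minvals = torch.amin(torch.stack(my_list), dim=0)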

Hi @ntomita, yes, I arrived at that same answer after noticing the torch.stack pattern for this kind of operation and then considering the dim argument too. Thanks anyway!

Hi,

If each element of a tuple/list is a tensor and I would like to find the mean of each tensor at dim=1, how can this be done without a for loop?

Example:

sample = (tensor([[0.], [0.], [0.]], device='cuda:0'), 
                 tensor([[2., 3.], [2., 4.], [4., 3.]], device='cuda:0'),
                 tensor([[1.], [3.], [4.]], device='cuda:0'), 
                 tensor([[1., 3.], [0., 0.], [4., 2.]], device='cuda:0'))

Desired output:

sample = (tensor([[0.], [0.], [0.]], device='cuda:0'),
          tensor([[2.5], [3.], [3.5]], device='cuda:0'),
          tensor([[1.], [3.], [4.]], device='cuda:0'),
          tensor([[2.], [0.], [3.]], device='cuda:0'))

Thanks

You could try:

tuple([t.mean(dim=1) for t in sample])

Example:

sample = (torch.randn(3,1), torch.randn(3,2), torch.randn(3,1))
>>> sample
(tensor([[-0.6243],
        [ 0.2206],
        [-1.9922]]), tensor([[-0.7084,  2.9351],
        [-0.8864, -0.7852],
        [-0.3822,  0.1747]]), tensor([[-2.1211],
        [ 1.0442],
        [ 0.8821]]))
>>> tuple([t.mean(dim=1) for t in sample])
(tensor([-0.6243,  0.2206, -1.9922]), tensor([ 1.1134, -0.8358, -0.1038]), tensor([-2.1211,  1.0442,  0.8821]))