"Cumulative" average of every N elements along a tensor dimension

Hi, I want to compute the average of every N consecutive elements (a sliding window of size N) in a 1D torch tensor of size M. As an example, if my tensor is tensor([7, 4, 8, 2, 6]) and N = 3, I want the output to be tensor([6.3333, 4.6667, 5.3333]), since (7+4+8)/3 = 6.3333, (4+8+2)/3 = 4.6667, and (8+2+6)/3 = 5.3333.
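In code, the 1-D version of what I want is just a sliding slice-and-mean:

import torch

t = torch.tensor([7., 4., 8., 2., 6.])
N = 3
# windows [7,4,8], [4,8,2], [8,2,6] and their means
print(torch.stack([t[i:i+N].mean() for i in range(len(t) - N + 1)]))
# tensor([6.3333, 4.6667, 5.3333])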

To be precise, my input tensor has dimensions (d_0, d_1, ..., d_{n-1}, d_n). I take two variables as input, 1 <= N <= d_{n-1} and 0 <= idx < d_n, and I want as output a tensor with dimensions (d_0, d_1, ..., d_{n-1} - N + 1), where I have averaged over every N elements of slice idx of the last dimension, as above. Here’s some code showing how I do it at the moment, and the result:

import torch

d0, d1, d2 = 2, 5, 4
N, idx = 3, 2
input = torch.randint(0, 10, (d0, d1, d2)).float()
output = torch.cat([torch.tensor([input[d0idx, i:i+N, idx].mean(dim=-1)
                                  for i in range(0, d1 - N + 1)]).unsqueeze(0)
                    for d0idx in range(input.shape[0])], dim=0)
print(input)
>>> tensor([[[0., 0., 3., 7.],
         [5., 1., 2., 4.],
         [2., 8., 7., 5.],
         [4., 0., 0., 2.],
         [2., 8., 5., 3.]],

        [[1., 7., 1., 4.],
         [8., 4., 4., 7.],
         [3., 8., 4., 0.],
         [0., 4., 1., 0.],
         [6., 4., 7., 2.]]])

print(output)
>>> tensor([[4., 3., 4.],
        [3., 3., 4.]])

print(output.shape)
>>> torch.Size([2, 3])

Ideally I do not want to implement this like I have right now with the nested list comprehension. What is the best way to do this?

Thanks!

Hi Imperial!

Set up a Conv1d that has the desired kernel. For the example you give, you
would want a kernel_size of 3 and you would set all three kernel values to
1/3, so that the convolution computes a three-element mean.

If I understand what you want here correctly, you would first select out the idx
slice of dimension d_n. Then reshape() dimensions d_0, ..., d_{n - 2}
into a single “batch” dimension, while using the reshape() to add a singleton
channels dimension. Apply the Conv1d and reshape() again to get your
desired output.
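
In code, that recipe might look something like the following untested sketch
(window_mean is just an illustrative name):

import torch

def window_mean(x, N, idx):
    conv = torch.nn.Conv1d(1, 1, kernel_size=N, bias=False)
    with torch.no_grad():
        conv.weight.fill_(1.0 / N)              # averaging kernel: every weight is 1/N
        sliced = x[..., idx]                    # (d_0, ..., d_{n-1})
        lead, length = sliced.shape[:-1], sliced.shape[-1]
        batched = sliced.reshape(-1, 1, length) # collapse leading dims, add channel dim
        out = conv(batched)                     # (batch, 1, d_{n-1} - N + 1)
        return out.reshape(*lead, length - N + 1)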

Best.

K. Frank

Thanks! This worked:

k = torch.nn.Conv1d(1, 1, kernel_size=N, bias=False)
with torch.no_grad():
    # fill the kernel with 1/N so the convolution computes a window mean
    k.weight = torch.nn.Parameter(torch.ones_like(k.weight) * 1/N)
    # (d0, d1) slice -> (d0, 1, d1) -> conv -> (d0, 1, d1 - N + 1) -> (d0, d1 - N + 1)
    output = k(input[..., idx].unsqueeze(1)).squeeze(1)
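
For the (2, 5, 4) example above this gives output with shape torch.Size([2, 3]), matching the list-comprehension version.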