Add tensors within a tensor

I have quite a basic question but I could not find a similar question answered elsewhere.

I would like to add tensors within a larger tensor. Specifically, I have a tensor of shape (batch, seq_len, max_len, embedding_dim) and I want to add together all of the tensors along the max_len dimension so that I am left with (batch, seq_len, embedding_dim), i.e. each embedding in my new tensor is the element-wise sum of the max_len embedding vectors.

I couldn’t find a straightforward way to do this without using a Python for loop to build a new tensor that is the sum of the max_len tensors. Also, a.sum(-1) is not what I need, as that just adds up all of the numbers along the embedding dimension. Instead (based on the example below), I would like each resulting embedding to be:

[ 2.3677,  1.0233,  0.6200, -1.1026, -0.1339] + [-0.2428,  0.0120, -0.6363,  0.8928, -0.6387]
[-0.4187,  1.4169, -0.1631,  0.5966, -0.2203] + [-0.6647,  0.9166, -1.8717, -2.5794, -1.1827]

and so on.

Any help would be appreciated, thanks!

import torch

batch = 2
seq_len = 3
max_len = 2        # max_len is the max number of components in the batch for this feature
embedding_dim = 5

a = torch.randn(batch, seq_len, max_len, embedding_dim)
print(a.size())
print(a)


torch.Size([2, 3, 2, 5])
tensor([[[[ 2.3677,  1.0233,  0.6200, -1.1026, -0.1339],
          [-0.2428,  0.0120, -0.6363,  0.8928, -0.6387]],

         [[-0.4187,  1.4169, -0.1631,  0.5966, -0.2203],
          [-0.6647,  0.9166, -1.8717, -2.5794, -1.1827]],

         [[ 1.7884, -0.6862, -0.8162, -0.5604,  0.0198],
          [ 0.6459,  0.3187,  1.9847,  2.4008,  0.3162]]],

        [[[ 0.7555,  2.9180,  0.7002,  2.8533, -1.9177],
          [-0.0804,  0.4459,  0.4908,  1.3397, -1.8577]],

         [[ 0.2427, -2.0033, -0.6455,  0.0856, -0.4497],
          [-0.4160,  1.3193, -0.0158,  0.1067,  0.8288]],

         [[-3.3968,  0.5313,  0.4720,  0.8234, -0.3139],
          [ 0.6371, -1.4486, -1.5495, -1.0803,  0.2780]]]])
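
For reference, the kind of Python loop I was hoping to avoid (continuing from the snippet above; summed is just an illustrative name) would look something like this:

# Accumulate the max_len embeddings by hand along dim 2
summed = torch.zeros(batch, seq_len, embedding_dim)
for i in range(max_len):
    summed += a[:, :, i, :]
print(summed.size())  # torch.Size([2, 3, 5])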

I’m not sure if I misunderstand the question, but would a.sum(2) work?
This would apply the sum operation over dim 2.
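
For example, something like this should confirm it (the name b is just for illustration):

b = a.sum(dim=2)   # sum over dim 2 (max_len)
print(b.size())    # torch.Size([2, 3, 5])
# Each slice is the element-wise sum of the max_len embeddings,
# e.g. b[0, 0] should equal a[0, 0, 0] + a[0, 0, 1]
print(torch.allclose(b[0, 0], a[0, 0, 0] + a[0, 0, 1]))  # True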

Indeed you are correct! I tried this a couple of hours ago but must have miscalculated what I expected the result to be, and then I got stuck trying much more complicated approaches. Thanks so much for clearing that up!