Adding a list of tensors together

I would like to add a list of tensors together.

I am trying test-time-augmentation (tta) with 6 images of different scales and flips.
Here is the relevant code snippet. I am using ttach, a tta wrapper

for batch_idx, sample in enumerate(test_loader):
    masks = []
    img = sample['img'].to(self.device)
    for transformer in tta_transforms:
        augmented_image = transformer.augment_image(img)
        model_output = self.tta_model(augmented_image)
        deaug_mask = transformer.deaugment_mask(model_output)
        masks.append(deaug_mask)
    tta_mask = torch.zeros_like(masks[0])
    for mask in masks:
        tta_mask = torch.add(tta_mask, F.softmax(mask, dim=1))
    img_name = sample['img_name']
    #segLabel = sample['segLabel'].to(self.device)
    outputs, sig = self.model(img)
    tta_mask = torch.div(tta_mask, len(masks))

masks is a list of 6 tensors of shape [B x C x H x W], i.e. [12 x 7 x 368 x 640].
To add them together, I initialize tta_mask = torch.zeros_like(masks[0]), accumulate with torch.add(tta_mask, F.softmax(mask, dim=1)), and then use torch.div to divide by 6 (the number of tensors in the list).

I am wondering if this is the correct way to add a list of tensors together to get the mean? I am not getting good results (accuracy drops from ~95% to 89% with horizontal flips and scales [0.5, 1, 1.5]), and I would like to rule out my implementation as the cause.
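For what it's worth, the add-then-divide loop above does compute the element-wise mean. A quick sketch checks it against a stack-and-mean one-liner (using random stand-in tensors with smaller H and W than the thread's [12 x 7 x 368 x 640], purely for speed):

```python
import torch
import torch.nn.functional as F

# Stand-ins for the 6 de-augmented masks (smaller spatial size than 368 x 640)
masks = [torch.randn(12, 7, 8, 10) for _ in range(6)]

# The loop-based mean from the snippet above
tta_mask = torch.zeros_like(masks[0])
for mask in masks:
    tta_mask = torch.add(tta_mask, F.softmax(mask, dim=1))
tta_mask = torch.div(tta_mask, len(masks))

# Equivalent one-liner: stack along a new dim 0, then average over it
stacked = torch.stack([F.softmax(m, dim=1) for m in masks], dim=0)  # [6, 12, 7, 8, 10]
mean_mask = stacked.mean(dim=0)                                     # [12, 7, 8, 10]

assert tta_mask.shape == (12, 7, 8, 10)
assert torch.allclose(tta_mask, mean_mask, atol=1e-6)
```

So the averaging itself is correct; if accuracy drops, the cause is more likely elsewhere (e.g. which augmentations are used, or averaging logits vs. softmax probabilities).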

Thank you very much for your help!

Use torch.cat() or torch.stack()

Oh, but I thought that
torch.cat() or torch.stack()
would change the dimensions. From an example on Stack Overflow:
So if A and B are of shape (3, 4), torch.cat([A, B], dim=0) will be of shape (6, 4) and torch.stack([A, B], dim=0) will be of shape (2, 3, 4).

Which in my case would be
torch.cat([A, B], dim=1), resulting in [12 x 14 x 368 x 640]?
Does this still work when I want the output to be [12 x 7 x 368 x 640]?
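A small sketch of the shape behavior being discussed (the (3, 4) shapes come from the Stack Overflow example quoted above; the mask tensors use smaller H and W stand-ins for speed):

```python
import torch

A = torch.randn(3, 4)
B = torch.randn(3, 4)

# cat joins along an existing dim; stack creates a new one
assert torch.cat([A, B], dim=0).shape == (6, 4)
assert torch.stack([A, B], dim=0).shape == (2, 3, 4)

# With mask-shaped tensors, cat along dim=1 does double the channel dim,
# but stack adds a leading dim that a mean can collapse away again
m1 = torch.randn(12, 7, 8, 10)  # stand-in for [12, 7, 368, 640]
m2 = torch.randn(12, 7, 8, 10)
assert torch.cat([m1, m2], dim=1).shape == (12, 14, 8, 10)
assert torch.stack([m1, m2], dim=0).mean(dim=0).shape == (12, 7, 8, 10)
```

So stack followed by mean(dim=0) returns the original per-tensor shape, which is what the question is asking for.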

If you just want to collect them, why not use a list? Or what shape do you expect?

So what I’m actually trying to do is get the mean of the list of 6 tensors from different augmented images, to be used during testing (test-time augmentation).
My expected output would be the mean of all 6 tensors, with the same shape:
[12 x 7 x 368 x 640]

So torch.stack() solves your problem: stack the masks along a new leading dimension, then take the mean over it.
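A minimal sketch of that suggestion, keeping the softmax from the original snippet (random stand-in tensors with smaller H and W than the real masks):

```python
import torch
import torch.nn.functional as F

# Stand-ins for the 6 de-augmented masks
masks = [torch.randn(12, 7, 8, 10) for _ in range(6)]

# Stack into [6, 12, 7, 8, 10], then average over the new dim 0
tta_mask = torch.stack([F.softmax(m, dim=1) for m in masks], dim=0).mean(dim=0)

assert tta_mask.shape == (12, 7, 8, 10)  # same shape as each input mask
```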