I would like to add a list of tensors together.
I am trying test-time augmentation (TTA) with 6 images of different scales and flips.
Here is the relevant code snippet. I am using ttach, a TTA wrapper:
    for batch_idx, sample in enumerate(test_loader):
        masks = []
        img = sample['img'].to(self.device)
        for transformer in tta_transforms:
            augmented_image = transformer.augment_image(img)
            model_output = self.tta_model(augmented_image)
            deaug_mask = transformer.deaugment_mask(model_output)
            masks.append(deaug_mask)
        # Accumulator must match a single mask's shape, not the list itself
        tta_mask = torch.zeros_like(masks[0])
        for mask in masks:
            tta_mask = torch.add(tta_mask, F.softmax(mask, dim=1))
        img_name = sample['img_name']
        # segLabel = sample['segLabel'].to(self.device)
        outputs, sig = self.model(img)
        tta_mask = torch.div(tta_mask, len(masks))
masks is a list of 6 tensors of shape [B x C x H x W], which here is [12 x 7 x 368 x 640].
To add them together, I accumulate torch.add(tta_mask, F.softmax(mask, dim=1)) in a loop, where tta_mask starts as a zero tensor with the shape of a single mask, and then use torch.div to divide the sum by 6 (the number of tensors in the list).
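For reference, the loop-and-divide accumulation described above is numerically equivalent to stacking the softmaxed masks along a new dimension and taking the mean over it. A minimal sketch, with random tensors standing in for the de-augmented masks and the shapes shrunk for illustration (the one-liner with torch.stack is just an alternative formulation, not the original code):

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-ins for the de-augmented masks; in the real loop these
# come from transformer.deaugment_mask(...). Shapes reduced for illustration.
masks = [torch.randn(2, 7, 4, 4) for _ in range(6)]

# Loop-and-divide, as in the question.
tta_mask = torch.zeros_like(masks[0])
for mask in masks:
    tta_mask = tta_mask + F.softmax(mask, dim=1)
tta_mask = tta_mask / len(masks)

# Equivalent: stack along a new leading dim, then average over it.
mean_mask = torch.stack([F.softmax(m, dim=1) for m in masks], dim=0).mean(dim=0)

# Both paths produce the same mean probability map.
assert torch.allclose(tta_mask, mean_mask, atol=1e-6)
```

If the two disagree in your run, the averaging itself is the bug; if they agree, the accuracy drop is more likely coming from the augmentations (e.g. the scale transforms) than from the reduction.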
I am wondering if this is the correct way to average a list of tensors. I am not getting good results (accuracy drops from ~95% to 89% with horizontal flip and scales [0.5, 1, 1.5]), and I would like to rule out my implementation as the cause.
Thank you very much for your help!