Test time augmentation

Hi everyone,

Test-time augmentation (TTA) comes up in a lot of the academic literature and in Kaggle solutions. I have managed to implement ten-crop augmentation at test time, where the predictions over the ten crops are averaged.
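Roughly what I have at the moment (a minimal sketch, assuming torchvision, a 224×224 model, and that normalisation is handled elsewhere):

```python
import torch
from torchvision import transforms

# Ten-crop TTA: four corners + centre crop, plus horizontal flips of each.
tta_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.TenCrop(224),
    transforms.Lambda(lambda crops: torch.stack(
        [transforms.ToTensor()(c) for c in crops])),  # -> (10, C, H, W)
])

@torch.no_grad()
def predict_ten_crop(model, batch):
    # batch: (B, 10, C, H, W), coming from the transform above via a DataLoader
    b, n, c, h, w = batch.shape
    logits = model(batch.view(b * n, c, h, w))      # (B*10, num_classes)
    probs = logits.softmax(dim=1).view(b, n, -1)    # (B, 10, num_classes)
    return probs.mean(dim=1)                        # average over the ten crops
```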

I've seen more complex TTA strategies in He et al., 2015 and in the ResNet paper, Deep Residual Learning for Image Recognition. For example, the following is taken from He et al., 2015:

We adopt the strategy of “multi-view testing on feature maps” used in the SPP-net paper [11]. We further improve this strategy using the dense sliding window method in [24, 25]. We first apply the convolutional layers on the resized full image and obtain the last convolutional feature map. In the feature map, each 14×14 window is pooled using the SPP layer [11]. The fc layers are then applied on the pooled features to compute the scores. This is also done on the horizontally flipped images. The scores of all dense sliding windows are averaged [24, 25]. We further combine the results at multiple scales as in [11].
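This is only my rough reading of that paragraph, not the authors' code: I replace the SPP layer with plain average pooling, and the window size and scales below are guesses. It assumes the model splits into a convolutional `backbone` and an fc `classifier`:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def dense_multiview_predict(backbone, classifier, image,
                            scales=(224, 256, 288), win=14):
    """Sketch of dense sliding-window testing on the last conv feature map.

    backbone:   conv layers only, returns a feature map of shape (1, C, h, w)
    classifier: fc head that expects a (N, C) pooled feature vector
    image:      (1, 3, H, W) tensor, already normalised
    """
    all_scores = []
    for size in scales:
        resized = F.interpolate(image, size=size, mode='bilinear',
                                align_corners=False)
        for img in (resized, torch.flip(resized, dims=[3])):  # also the horizontal flip
            fmap = backbone(img)                               # (1, C, h, w)
            # Average-pool every win x win window (stride 1) as a stand-in for SPP.
            k = min(win, fmap.shape[-2], fmap.shape[-1])
            pooled = F.avg_pool2d(fmap, kernel_size=k, stride=1)
            n, c, h, w = pooled.shape
            feats = pooled.permute(0, 2, 3, 1).reshape(-1, c)  # one row per window
            scores = classifier(feats).softmax(dim=1)          # (h*w, num_classes)
            all_scores.append(scores.mean(dim=0))              # average over windows
    return torch.stack(all_scores).mean(dim=0)                 # average flips and scales
```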

Are there any existing implementations of similar TTA strategies?

Thank you!
Yan
