When we create transformations to augment images, they are generally probabilistic. So the model generally does not see the original images; it always sees transformed ones. Isn't that a problem? Wouldn't an approach that feeds in both the transformed and the original images be better?
Actually, that is not the case. Some transforms, like cropping or resizing, are applied to all images, for instance for computational reasons.
But other transforms, like affine or ColorJitter, can be applied randomly. In that case you provide p=your_desired_prob to ensure the transformation happens only with probability p. So a fraction 1-p of the images passing through that transform remain original.
For instance, see the docs of RandomApply.
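To make the idea concrete, here is a minimal pure-Python sketch of the behavior described above (it mimics what torchvision's `RandomApply` does, without depending on torchvision; the function name `random_apply` is just for illustration):

```python
import random

def random_apply(transform, x, p, rng=random):
    """Apply `transform` to `x` with probability `p`; otherwise return `x` unchanged."""
    if rng.random() < p:
        return transform(x)
    return x

# Over many samples, roughly a fraction p of outputs are transformed
# and a fraction 1 - p stay original.
rng = random.Random(0)
p = 0.3
n = 100_000
transformed = sum(random_apply(lambda t: -t, 1, p, rng) == -1 for _ in range(n))
print(transformed / n)  # close to 0.3
```

So with p=0.3, about 70% of the images pass through that particular transform untouched.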
Yes, but we generally stack a bunch of transformations, so usually at least one of them ends up being applied. Doesn't that mean the model is usually not seeing original images, but transformed ones?
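A quick sanity check of that intuition: assuming k independent transforms, each applied with probability p, an image passes through the whole pipeline completely untouched with probability (1 - p)^k, which shrinks fast as k grows:

```python
# Probability that an image survives all k stacked transforms unchanged,
# assuming each transform fires independently with probability p.
def fully_original_fraction(p: float, k: int) -> float:
    return (1 - p) ** k

for k in (1, 3, 5):
    print(k, fully_original_fraction(0.5, k))
```

With p=0.5 and five stacked transforms, only about 3% of images stay fully original, which supports the concern in the question.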