I’m trying to apply data augmentation with PyTorch. In particular, I have a dataset of 150 images and I want to apply 5 transformations (horizontal flip, 3 random rotations, and vertical flip) to every single image so that I end up with 750 images, but with my code I always get 150 images.
If you want more images, you need to generate them ahead of time. There are a number of libraries that can help you do this; Albumentations and imgaug come to mind.
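You can also do this in plain PyTorch with a small wrapper dataset. This is just a sketch under the assumption that your base dataset returns `(image, label)` pairs; `AugmentedDataset` is a hypothetical name, not a built-in class. It yields one copy of every image per transform, so 150 images and 5 transforms give 750 samples:

```python
from torch.utils.data import Dataset


class AugmentedDataset(Dataset):
    """Expand a base dataset by applying each transform to every image.

    Length is len(base) * len(transforms); index idx maps to image
    idx // len(transforms) under transform idx % len(transforms).
    """

    def __init__(self, base, transforms):
        self.base = base
        self.transforms = transforms

    def __len__(self):
        return len(self.base) * len(self.transforms)

    def __getitem__(self, idx):
        img_idx, t_idx = divmod(idx, len(self.transforms))
        img, label = self.base[img_idx]
        return self.transforms[t_idx](img), label
```

Note that the "random" transforms are still re-sampled on every access, so the 5 copies of an image are only fixed if you use deterministic transforms (or cache the outputs to disk, which is what the libraries above help with).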
But the bigger question is: what are you trying to accomplish with these transformations? If you just want more random samples, then train your network for more epochs. Every epoch your dataset will be transformed differently (since you have random transformations in there), so you’ll get a fresh set of 150 images for your network to chew on.
Lastly, you don’t want to chain your transforms the way you’ve done above, since applying the rotations one after another compounds them and can partially undo your intended result.