Basic question about torchvision.transforms

Hello everyone, a basic question about torchvision.transforms.

If my training transform is defined as this:

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

Does this mean that with every epoch I’m actually training with a different data set?

Yes — the random transforms are re-sampled every time an image is loaded, so at every epoch a freshly augmented version of your data is produced.
Here is a full explanation of what is going on.
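To see why this happens, here is a minimal stdlib-only sketch of the mechanism. The class names mimic torchvision but are toy stand-ins (an "image" is just a list of pixel rows): the key point is that the transform runs inside `__getitem__`, so it is re-applied with fresh randomness on every access rather than once up front.

```python
import random

class RandomHorizontalFlip:
    """Toy stand-in for torchvision's RandomHorizontalFlip."""
    def __init__(self, p=0.5):
        self.p = p

    def __call__(self, img):
        # Flip every pixel row with probability p, sampled per call.
        if random.random() < self.p:
            return [row[::-1] for row in img]
        return img

class TinyDataset:
    """Toy stand-in for a torchvision dataset with a transform."""
    def __init__(self, images, transform=None):
        self.images = images
        self.transform = transform

    def __getitem__(self, idx):
        img = self.images[idx]
        # The transform is applied on every access, not stored once,
        # so epoch 1 and epoch 2 can see different versions of image 0.
        return self.transform(img) if self.transform else img

    def __len__(self):
        return len(self.images)

random.seed(0)
ds = TinyDataset([[[1, 2, 3]]], transform=RandomHorizontalFlip(p=0.5))
epoch1_view = ds[0]
epoch2_view = ds[0]  # same index, possibly a different augmented view
```

Over many epochs, the same index yields both the original and the flipped version, which is exactly the "different data set every epoch" effect asked about.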

Good luck


Excellent info @Nikronic. That matches what I see at runtime. Any thoughts on why the PyTorch team took this approach rather than a more static augmentation option? I imagine changing the training data on the fly, so to speak, could have both positive and negative effects.

I do not know exactly why this approach was chosen, but it solved my problem even on huge datasets like Places, which has about 1.8 million images. I augmented it to 6x its original size and everything worked fine!

Maybe if you explain your question more specifically, one of the PyTorch developers can answer you.

Good luck mate