Evaluation with transformations (object detection)

Hello everyone,
I’m trying to train an instance segmentation model following this PyTorch tutorial.
Everything works fine, but I’m wondering whether setting up a data_loader_test without transformations (and consequently evaluating the model without transformations) will cause problems, given that the model is trained on transformed images (I’m particularly concerned about rotated images).
I tried adding the transformations to data_loader_test, but the evaluation precision/recall dropped dramatically, so I guess I’m not supposed to do that…
Any thoughts?
Thanks

Hello @Fanny, how much data were you using for training? And which data did you use for the evaluation?
Basically, applying data augmentation or transformations is beneficial during training so that the model can better generalise to unseen data from another, or slightly different, distribution. In short, it is almost always necessary to apply data augmentation.

I use ~10,000 images. I split them into 7,000 images for training, 1,500 for evaluation and 1,500 for testing (after training).
I applied data augmentation to the 7,000 training images, but it appears that I cannot apply data augmentation to the evaluation images without significantly hurting the evaluation precision/recall.
Yet I’ve learnt that evaluation is used for the model to adjust its parameters. Is that the case when training with torch?

Ah I see, I think I kind of misunderstood your case.
You are correct to apply data augmentation to the training data. It is, however, not necessary to apply the same or further augmentation for evaluation, since you only want to measure the performance and then adjust the parameters if necessary.
So for evaluation you usually want to apply just normalisation.
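As a rough sketch of how this is often set up (assuming a recent torchvision with the v2 transforms; older versions of the tutorial ship their own transforms.py helpers instead, and MyDataset below is just a placeholder for your own dataset class), you can branch on a train flag so the augmentation is only applied to the training split:

```python
import torch
from torchvision.transforms import v2 as T

def get_transform(train):
    # Augmentations only for the training split; the evaluation split just
    # gets the image/dtype conversion (add normalisation here if your model
    # expects it).
    transforms = []
    if train:
        transforms.append(T.RandomHorizontalFlip(0.5))
        # if rotated objects matter for your data, you could also add
        # something like T.RandomRotation(degrees=15) here
    transforms.append(T.ToImage())
    transforms.append(T.ToDtype(torch.float32, scale=True))
    return T.Compose(transforms)

# Hypothetical usage with a placeholder dataset class:
#   dataset_train = MyDataset("data/train", transforms=get_transform(train=True))
#   dataset_eval  = MyDataset("data/val",   transforms=get_transform(train=False))
```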

You could apply it to both the training and the validation set, but it’s not necessary.

Thanks for the reply :slight_smile:
In my understanding, evaluation is used to adjust the model’s weights. However, if 50% of the images in my training set are rotated and 0% in my validation set (no transforms), how will the model adjust its weights to better detect rotated images?

You are correct, and such patterns are exactly what the model learns during training :D. By applying data augmentation we introduce more variance, i.e. more data that better represent “real life” inputs for the model to learn from (more data is good for training), so the model is able to generalise better afterwards. During validation it’s not necessary to generate more data, since we want to see how the model performs on a different distribution. You could apply data augmentation there to see what the outputs would be for a rotated input image, but in my opinion that would defeat the purpose of the evaluation.
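If you do want to check how the trained model handles rotated inputs without touching your clean validation metrics, one option is to run a second, separate evaluation pass on a rotated copy of the validation set. A minimal sketch, assuming a recent torchvision where the v2 transforms also rotate the masks/boxes in the target, and reusing the hypothetical get_transform / MyDataset names from above:

```python
import torch
from torchvision.transforms import v2 as T

# Deterministic 90° rotation for a "stress test" copy of the validation data;
# degrees=(90, 90) means the rotation is always exactly 90°, so the metrics
# are reproducible. The clean validation loader stays untouched.
rotated_eval_transform = T.Compose([
    T.RandomRotation(degrees=(90, 90)),
    T.ToImage(),
    T.ToDtype(torch.float32, scale=True),
])

# Hypothetical usage, run separately from the normal validation pass
# (e.g. with the tutorial's evaluate helper):
#   dataset_eval_rotated = MyDataset("data/val", transforms=rotated_eval_transform)
#   evaluate(model, DataLoader(dataset_eval_rotated, ...), device=device)
```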

After the evaluation, depending on how well the model performed or how satisfied we are, we would adjust the training parameters such as the learning rate, batch size, etc.
So it’s not quite true that evaluation is used to adjust the model’s weights: the weights are not updated during evaluation; instead, you adjust the training parameters for the next training run.
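To make that concrete, here is a minimal sketch of the usual loop (assuming the model, optimizer and data loaders are already set up as in the tutorial): the weights only change inside the training phase, while evaluation runs under torch.no_grad() and just produces predictions/metrics.

```python
import torch

def train_and_evaluate(model, optimizer, data_loader_train, data_loader_eval,
                       device, num_epochs):
    # Sketch only: the weights are updated exclusively in the training phase.
    for epoch in range(num_epochs):
        model.train()
        for images, targets in data_loader_train:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss_dict = model(images, targets)  # torchvision detection models return a dict of losses in train mode
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                    # the only place the weights change

        model.eval()
        with torch.no_grad():                   # no gradients, so no weight updates here
            for images, _ in data_loader_eval:
                images = [img.to(device) for img in images]
                predictions = model(images)     # forward pass only
                # accumulate precision/recall from `predictions` here
        # based on the metrics, you (not the model) adjust the learning rate,
        # batch size, augmentation, etc. for the next training run
```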
But feel free to share what you think about it!