Hello!
I had a question regarding validation loss.
I’m doing semi-supervised binary semantic segmentation with 20% labelled data. My predicted mask is improving every epoch, and the metrics at each epoch are quite good, for example:
Epoch: 6, Running Train loss: 0.018475, Running Validation loss: 0.153047, Validation Accuracy: 94.0433, Dice Score: 93.5111, BinaryJacIndx Score: 89.1448
My problem is that for the longest time I thought my model was overfitting, even though I augmented the training images (resized random crop, random rotation, random horizontal flip, color jitter and Gaussian blur) and also made sure to balance my training data. A rough sketch of the pipeline is below.
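For reference, my augmentation pipeline looks roughly like this (the parameter values here are placeholders rather than my exact settings, and the geometric transforms are applied to both image and mask):

```python
from torchvision import transforms

# Rough sketch of the training augmentations (placeholder parameters).
# Geometric transforms must be applied to image and mask together;
# photometric ones (color jitter, blur) to the image only.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(256),
    transforms.RandomRotation(degrees=15),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.GaussianBlur(kernel_size=3),
    transforms.ToTensor(),
])
```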
I’m using a batch size of 32. The training data is roughly 5120 images, so the length of the training loader is 160; my validation data is about 1100 images and the length of the validation loader is 31.
What I’m doing is dividing the running training loss by the length of the training loader, and the running validation loss by the length of the validation loader.
Should I instead divide by the length of the loader multiplied by the batch size (running loss / (length of loader * batch size)), or is what I’m already doing correct and the model is indeed overfitting?
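For concreteness, here is a minimal sketch of the two options I mean, assuming `criterion` returns the mean loss over the batch and `model` / `train_loader` are my existing objects:

```python
# Option A (what I'm doing now): criterion already averages over the batch,
# so dividing the accumulated sum by the number of batches gives an
# average per-batch loss (slightly off if the last batch is smaller).
running_loss = 0.0
for images, masks in train_loader:
    loss = criterion(model(images), masks)   # mean over this batch
    running_loss += loss.item()
epoch_loss_a = running_loss / len(train_loader)

# Option B: weight each batch by its size and divide by the total number
# of samples, which handles an incomplete final batch exactly.
running_loss = 0.0
for images, masks in train_loader:
    loss = criterion(model(images), masks)
    running_loss += loss.item() * images.size(0)
epoch_loss_b = running_loss / len(train_loader.dataset)
```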
Thank you!