Training issues: validation loss is lower than training loss

Hi, I have a regression problem and designed a deep neural network. I trained my network on a small dataset, and this is my loss-versus-epoch diagram (red: validation loss, blue: training loss):

[image: training/validation loss vs. epoch]
My validation loss is lower than my training loss. My other problem is generalization: when I train my network on all of the training data, the loss doesn't change at all. Would you mind helping me solve this?


That’s normal while training. Your training loss is the average of the per-batch losses collected between the beginning and the end of the epoch, while your validation loss is computed only once the epoch has already finished. If the network improved over the course of the epoch, the validation loss will therefore always show a slight advantage.
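One way to check this is to re-evaluate the training loss at the end of each epoch, the same way the validation loss is computed, and compare it with the running average. A minimal sketch, assuming PyTorch and using a made-up model and synthetic data purely to illustrate the logging difference:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Synthetic regression data (placeholder for your dataset)
X = torch.randn(512, 10)
y = X.sum(dim=1, keepdim=True) + 0.1 * torch.randn(512, 1)
train_loader = DataLoader(TensorDataset(X[:400], y[:400]), batch_size=32, shuffle=True)
val_loader = DataLoader(TensorDataset(X[400:], y[400:]), batch_size=32)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

@torch.no_grad()
def evaluate(loader):
    # Loss over a whole loader with fixed weights, as validation is done
    model.eval()
    total, n = 0.0, 0
    for xb, yb in loader:
        total += criterion(model(xb), yb).item() * len(xb)
        n += len(xb)
    return total / n

for epoch in range(5):
    model.train()
    running = 0.0
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
        # Accumulated while the weights are still changing
        running += loss.item() * len(xb)
    avg_train = running / len(train_loader.dataset)
    # Re-evaluating on the training set *after* the epoch removes the lag
    end_train = evaluate(train_loader)
    end_val = evaluate(val_loader)
    print(f"epoch {epoch}: running train {avg_train:.4f} | "
          f"end-of-epoch train {end_train:.4f} | val {end_val:.4f}")
```

With a curve logged this way, the end-of-epoch training loss and the validation loss are measured under the same conditions, so the gap you see should shrink or disappear if this averaging effect is the cause.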


Dropout layers can also yield a higher training loss: dropout is active during training, so the training loss is computed on a randomly thinned “small” version of the network, while validation runs with dropout disabled, i.e., on the full model.
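A toy sketch of this effect (again assuming PyTorch, with a placeholder model): `model.train()` activates dropout and `model.eval()` disables it, so the same batch is scored by two different effective networks:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(),
    nn.Dropout(p=0.5),  # active only in train mode
    nn.Linear(64, 1),
)
criterion = nn.MSELoss()
x = torch.randn(64, 10)
y = x.sum(dim=1, keepdim=True)

model.train()  # dropout on: units are randomly zeroed
with torch.no_grad():
    train_mode_loss = criterion(model(x), y).item()

model.eval()   # dropout off: the full network is used
with torch.no_grad():
    eval_mode_loss = criterion(model(x), y).item()

print(f"train mode: {train_mode_loss:.4f} | eval mode: {eval_mode_loss:.4f}")
```

For a trained model the train-mode loss will typically sit above the eval-mode loss on the same data, which is exactly the pattern of a training curve above a validation curve.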