Training issues: validation loss is lower than training loss

Hi, I have a regression problem and designed a deep neural network. I trained my network on a small dataset, and this is my loss-versus-epoch diagram (red: validation loss, blue: training loss):

My validation loss is lower than my training loss. My other problem is generalization: when I train my network on all of the training data, the loss doesn't change at all. Would you mind helping me solve this?

That’s normal while training. Your training loss is the average of every batch loss between the beginning and the end of the epoch, but your validation loss is only computed after that epoch has already finished. So if the network improved during the epoch, validation will always show a slight advantage.
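To make that concrete, here is a toy sketch (not your actual training code) where the model's per-batch loss falls steadily over one epoch; the reported training loss is the average over all batches, while the validation loss reflects the model state at epoch end:

```python
# Toy illustration: suppose the model's loss falls steadily
# from 1.0 to 0.55 over one epoch of 10 batches.
batch_losses = [1.0 - 0.05 * i for i in range(10)]

# What the training curve reports: the average over the whole epoch.
train_loss = sum(batch_losses) / len(batch_losses)

# Validation runs only after the epoch, on the improved model.
val_loss = batch_losses[-1]

print(train_loss)  # 0.775
print(val_loss)    # 0.55 -- lower, because the model got better mid-epoch
```

This is also why some people shift the training curve half an epoch to the left when comparing the two curves: it roughly aligns both losses with the same model state.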

Dropout layers can also yield a higher training loss, since dropout is active only at training time: each training pass runs a randomly “thinned” version of the network, while validation uses the full model.