I’ve been training my model using a training/validation split, and I have a result I’m happy with. Before evaluating the model on the test set, though, I’m not sure whether I should fold the validation data back into the training set and retrain.
If I were to retrain the model including the validation data, I’m also not sure how I would monitor overfitting. That is, I normally watch the loss and accuracy on the validation data, and if there were no longer a validation set to monitor, I wouldn’t know when to stop training.
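For concreteness, the monitoring I’m describing is basically early stopping: I track validation loss each epoch and stop once it hasn’t improved for a few epochs. A minimal sketch of that logic (the patience value and loss numbers here are made up for illustration):

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the 1-indexed epoch with the lowest validation loss,
    scanning until the loss has failed to improve for `patience` epochs."""
    best_epoch, best_loss = 0, float("inf")
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best_loss:
            best_epoch, best_loss = epoch, loss
        elif epoch - best_epoch >= patience:
            break  # no improvement for `patience` epochs: stop
    return best_epoch

# Validation loss improves until epoch 4, then degrades (overfitting).
losses = [0.90, 0.70, 0.55, 0.50, 0.53, 0.58, 0.64]
print(early_stop_epoch(losses))  # 4
```

If I merge the validation data into training, I lose the `val_losses` signal entirely, so I don’t see what would drive the stopping decision.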
Does anyone know what the best practice is here?