Training on validation data before test set

I’ve been training my model using a training/validation split, and I have a result I’m happy with. Before evaluating the model on the test set, though, I’m not sure whether I should fold the validation data back into training.

If I were to retrain the model including the validation data, I’m also not sure how I would monitor overfitting. Normally I monitor the loss and accuracy on the validation data, and if there were no longer a validation set to monitor, I wouldn’t know when to stop training.

Does anyone know what the best practice is here?

That’s a valid concern, and it’s why I wouldn’t retrain the model on the validation data.
If you retrained and used e.g. the test set to decide when to stop, you would be leaking data: the test set would have influenced training, so your final evaluation on it would be invalid.
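A common way to keep the validation set doing its job is early stopping: train only on the training split, check validation loss periodically, and stop once it hasn’t improved for a few checks. A minimal sketch of that idea (the `EarlyStopping` class and the `patience` value are just illustrative, not from any specific framework):

```python
class EarlyStopping:
    """Stop training when validation loss hasn't improved for
    `patience` consecutive checks (a simple illustrative helper)."""

    def __init__(self, patience=3):
        self.patience = patience
        self.best_loss = float("inf")
        self.bad_checks = 0

    def step(self, val_loss):
        """Record one validation loss; return True when training should stop."""
        if val_loss < self.best_loss:
            self.best_loss = val_loss
            self.bad_checks = 0
        else:
            self.bad_checks += 1
        return self.bad_checks >= self.patience


# Usage: feed it the validation loss after each epoch.
stopper = EarlyStopping(patience=2)
losses = [0.9, 0.7, 0.6, 0.65, 0.64, 0.66]
for epoch, loss in enumerate(losses):
    if stopper.step(loss):
        print(f"stopping after epoch {epoch}")
        break
```

Keeping this loop tied to the validation split (never the test set) is exactly what preserves the test set as an unbiased final estimate.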