I read about cross-validation and implemented it recently. My question is: after the cross-validation process, we still have to test the model, right? Because cross-validation uses only the training set (the data having been split into train/test beforehand). If so, does that mean I have to save the model after each fold? Which one should I use then, since I will have 5 saved models in the case of 5-fold cross-validation? Or can I save the initial model and reload it at the start of each fold before training?
In my understanding, the model should be randomly initialized at the start of training in each fold.
After training the model on the training split of a particular fold, evaluate its performance on that fold's test split and record this number. (Saving the model and data configuration is optional, depending on whether you need reproducibility.)
At the end of all k folds, (typically) take the mean of the per-fold performances.
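The loop above can be sketched in plain Python. This is a minimal illustration, not a definitive implementation: the `NearestCentroid` class is a hypothetical toy model standing in for whatever estimator you actually use, and the synthetic two-cluster data exists only so the example runs end to end. The key point is that a fresh model is constructed inside the loop, once per fold.

```python
import random
import statistics

# Hypothetical toy model used only to illustrate the CV loop;
# any estimator with fit/predict could take its place.
class NearestCentroid:
    def fit(self, X, y):
        # compute the mean feature vector per class
        self.centroids = {}
        for label in set(y):
            pts = [x for x, yy in zip(X, y) if yy == label]
            self.centroids[label] = [sum(c) / len(pts) for c in zip(*pts)]
        return self

    def predict(self, X):
        def sq_dist(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        # assign each point to the class with the nearest centroid
        return [min(self.centroids, key=lambda lb: sq_dist(x, self.centroids[lb]))
                for x in X]

def k_fold_cv(X, y, k=5, seed=0):
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]   # k roughly equal folds
    scores = []
    for i in range(k):
        test_idx = set(folds[i])
        train_idx = [j for j in idx if j not in test_idx]
        # a fresh, freshly initialized model for every fold
        model = NearestCentroid().fit([X[j] for j in train_idx],
                                      [y[j] for j in train_idx])
        preds = model.predict([X[j] for j in folds[i]])
        acc = sum(p == y[j] for p, j in zip(preds, folds[i])) / len(folds[i])
        scores.append(acc)  # record the per-fold performance
    # report the mean of the k per-fold scores
    return statistics.mean(scores), scores

# synthetic two-cluster data (assumption, for illustration only)
X = [(0.1 * i, 0.0) for i in range(20)] + [(5.0 + 0.1 * i, 5.0) for i in range(20)]
y = [0] * 20 + [1] * 20
mean_acc, per_fold = k_fold_cv(X, y, k=5)
print(mean_acc, per_fold)
```

Note that none of the k models trained here needs to be kept: cross-validation estimates how well the training procedure generalizes. Once you are happy with the mean score, you typically retrain one final model on the full training set and evaluate it once on the held-out test set.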