K-fold Cross-Validation

I have a question. As I understand it, a validation set is usually used to tune hyperparameters and for early stopping to avoid overfitting in the case of a CNN/MLP. But when dealing with k-fold cross-validation, can we still use a validation dataset to tune hyperparameters and save checkpoints (network weights) at the point where we achieve the best validation result?

@rasbt wrote a great post on this topic here, which explains some common use cases and the advantages of certain approaches.
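
One common pattern that matches what you describe is to carve a validation subset out of each fold's training portion, checkpoint on the best validation loss, and only then evaluate on that fold's held-out test split. Below is a minimal sketch of this idea, assuming scikit-learn for the splits and a toy MLP on random data; the shapes, hyperparameters, and the `fold{fold}_best.pt` filename are all placeholders, not anything from the thread:

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import KFold, train_test_split

# Toy data standing in for the real dataset (hypothetical shapes).
X = np.random.randn(200, 10).astype(np.float32)
y = np.random.randint(0, 2, size=200).astype(np.int64)

kfold = KFold(n_splits=5, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(kfold.split(X)):
    # Carve a validation subset out of this fold's training portion;
    # the held-out test fold is never touched during tuning/checkpointing.
    tr_idx, val_idx = train_test_split(train_idx, test_size=0.2, random_state=0)

    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    best_val_loss = float("inf")
    for epoch in range(50):
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(torch.from_numpy(X[tr_idx])),
                         torch.from_numpy(y[tr_idx]))
        loss.backward()
        optimizer.step()

        # Checkpoint the weights whenever validation loss improves
        # (early-stopping style model selection).
        model.eval()
        with torch.no_grad():
            val_loss = criterion(model(torch.from_numpy(X[val_idx])),
                                 torch.from_numpy(y[val_idx])).item()
        if val_loss < best_val_loss:
            best_val_loss = val_loss
            torch.save(model.state_dict(), f"fold{fold}_best.pt")

    # Evaluate the best checkpoint on this fold's held-out test split.
    model.load_state_dict(torch.load(f"fold{fold}_best.pt"))
    model.eval()
    with torch.no_grad():
        preds = model(torch.from_numpy(X[test_idx])).argmax(dim=1)
        acc = (preds == torch.from_numpy(y[test_idx])).float().mean().item()
    print(f"fold {fold}: val_loss={best_val_loss:.4f}, test_acc={acc:.3f}")
```

The key point is that the test fold is used only once, after model selection is finished, so the cross-validated scores stay an unbiased estimate of generalization.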

Thanks for the response, but in my case I unfortunately have a very small dataset (around 40 samples).
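
With a dataset that small, one general option (a suggestion, not something from the linked post) is to increase k, or go all the way to leave-one-out, so each fold still trains on nearly all of the data. A minimal sketch with scikit-learn, using placeholder data of 40 samples:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, StratifiedKFold

# Hypothetical small dataset of 40 samples.
X = np.random.randn(40, 10)
y = np.random.randint(0, 2, size=40)

# Option 1: more folds, so each model trains on ~36 of the 40 samples.
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
print(sum(1 for _ in skf.split(X, y)), "folds of ~36 train / 4 test samples")

# Option 2: leave-one-out, the extreme case of k-fold with k = n.
loo = LeaveOneOut()
print(loo.get_n_splits(X), "train/test splits (one held-out sample each)")
```

Stratified splits also help here, since with 40 samples an unlucky random fold can easily end up with a badly skewed class balance.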