Get validation set out of training set

I noticed that when similar questions are asked, the commonly given advice is to use `SubsetRandomSampler` or `random_split`. Isn’t that wrong, though?

I always thought that the validation set should stay the same across all the different runs, since changing it every time wouldn’t allow a consistent estimate of the error for a given set of hyperparameters.

I guess the solution is as simple as setting a manual seed before calling `random_split` and then restoring the seed to its initial value afterwards. I just wonder if my idea of how validation works is correct.
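
Something like this, just as a minimal sketch (the dataset and the 800/200 split sizes are placeholders); instead of touching and restoring the global seed, `random_split` also accepts a `generator` argument, which keeps the split deterministic without affecting the global RNG state:

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Placeholder dataset: 1000 samples with 10 features each.
dataset = TensorDataset(torch.randn(1000, 10), torch.randint(0, 2, (1000,)))

# A seeded generator makes the split reproducible across runs
# without changing (or having to restore) the global RNG state.
train_set, val_set = random_split(
    dataset, [800, 200], generator=torch.Generator().manual_seed(42)
)
```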

I would say it depends a bit on your use case.
If you are running some quick experiments, setting the seed and splitting the datasets randomly might be sufficient to reproduce your results.
However, if you really need the same splits for each run, you could sample the split indices once, store them locally, and use them in a Subset to get the same training, validation (and test) sets.
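
A minimal sketch of that approach could look like the code below (the file name `split_indices.pt`, the placeholder dataset, and the 80/20 ratio are just assumptions for illustration):

```python
import torch
from torch.utils.data import Subset, TensorDataset

# Placeholder dataset; replace with your own.
dataset = TensorDataset(torch.randn(1000, 10), torch.randint(0, 2, (1000,)))

# Run once: draw a random permutation of the indices and store it to disk.
indices = torch.randperm(len(dataset))
split = int(0.8 * len(dataset))
torch.save({"train": indices[:split], "val": indices[split:]}, "split_indices.pt")

# In every run: load the stored indices and wrap the dataset in Subsets,
# so the training and validation samples are identical across runs.
idx = torch.load("split_indices.pt")
train_set = Subset(dataset, idx["train"].tolist())
val_set = Subset(dataset, idx["val"].tolist())
```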