Hyperparameter optimisation on a pre-defined validation set

Hi all,

I am working on a binary classification task with significant class imbalance, so I would like to run 3 different experiments. I have a pre-defined training, validation and test set. The experiments are as follows:

  1. ‘Normal’ data set split into train/validation/test sets; use the validation set for hyperparameter tuning.
  2. Same train/validation/test split, but upsample the minority class in the training set; find optimal hyperparameters on the validation set and evaluate on the test set.
  3. Same split again, but downsample the majority class in the training set and proceed as above.
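For reference, the upsampling step in experiment 2 can be sketched with stdlib Python alone by resampling minority-class indices with replacement until the classes are balanced (the `upsample_minority` helper below is just illustrative; in PyTorch the resulting indices could feed a `SubsetRandomSampler`, or a `WeightedRandomSampler` could replace the whole step):

```python
import random

def upsample_minority(labels, seed=0):
    """Return training indices where the minority class is resampled
    (with replacement) until both classes have equal counts.
    Hypothetical helper -- adapt to however your dataset exposes labels."""
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    # Draw extra minority indices with replacement to close the gap.
    extra = rng.choices(minority, k=len(majority) - len(minority))
    return majority + minority + extra

labels = [0, 0, 0, 0, 0, 1, 1]
idx = upsample_minority(labels)
print(len(idx))  # 10 indices: 5 per class after resampling
```

Downsampling (experiment 3) is the mirror image: sample `len(minority)` indices from the majority class without replacement.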

So my question is: is this feasible to do in PyTorch? I have done these experiments using an SVM in sklearn, where you can pass in pre-defined validation sets. Are there hyperparameter tuning libraries for PyTorch that allow this? I found this, but it seems to be broken at the moment. I suppose I could do the tuning manually, but that seems rather inefficient.
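If nothing else works, the manual route over a small grid with a fixed validation set is only a few lines. Here is a rough sketch of what I have in mind (`fake_evaluate` is a toy stand-in for the real train-on-train / score-on-validation step, and the grid values are made up):

```python
from itertools import product

def grid_search(param_grid, evaluate):
    """Try every combination in param_grid, score each one on the fixed
    validation set via `evaluate`, and return the best config found."""
    best_score, best_params = float("-inf"), None
    keys = sorted(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate(params)  # train on train set, score on val set
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Toy scorer: pretend validation performance peaks at lr=0.01, hidden=64.
def fake_evaluate(p):
    return -abs(p["lr"] - 0.01) - abs(p["hidden"] - 64) / 100

grid = {"lr": [0.1, 0.01, 0.001], "hidden": [32, 64]}
best, score = grid_search(grid, fake_evaluate)
print(best)  # {'hidden': 64, 'lr': 0.01}
```

The same loop would cover all three experiments by swapping which resampled training set `evaluate` trains on, with the validation and test sets untouched.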