I have a script in which I train two models independently, on the same dataset and with the same initialization.
Pseudocode would read something like:
1. initialize warm-up model
2. train and store warm-up model
3. load warm-up model and train using 1st strategy
4. load warm-up model and train using 2nd strategy
5. compare performances
Is there a way, within the script, to assign the training runs to different GPUs so that steps 3 and 4 happen in parallel, without calling a different .py file for each step?
Hi,
I am not aware of any way to do this in PyTorch. However, it seems like your use case is easily handled by writing a bash script: you can call the same .py script with different arguments, like so:
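(A minimal sketch, assuming a hypothetical train.py that accepts a --strategy flag; CUDA_VISIBLE_DEVICES pins each run to its own GPU, and the trailing & runs them in parallel.)

```bash
#!/usr/bin/env bash
# Launch both training strategies in parallel, one per GPU,
# reusing the same train.py with different arguments.
CUDA_VISIBLE_DEVICES=0 python train.py --strategy 1 &
CUDA_VISIBLE_DEVICES=1 python train.py --strategy 2 &
wait  # block until both background runs have finished
```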
That seems to be the only way to do it, but the problem is that I want to call the script many times with different hyperparameter values, so it gets way too complicated…
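For instance, even a small sweep already needs nested loops and manual GPU bookkeeping (a sketch with hypothetical --lr and --seed flags):

```bash
# Sweep over learning rates and seeds, two runs (one per GPU) at a time.
for lr in 0.1 0.01 0.001; do
  for seed in 0 1 2; do
    CUDA_VISIBLE_DEVICES=0 python train.py --strategy 1 --lr "$lr" --seed "$seed" &
    CUDA_VISIBLE_DEVICES=1 python train.py --strategy 2 --lr "$lr" --seed "$seed" &
    wait  # keep only two runs on the GPUs at once
  done
done
```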