I have a script in which I train two models independently, on the same dataset and with the same initialization.
The pseudocode reads something like:
- initialize warm-up model
- train and store warm-up model
- load warm-up model and train using 1st strategy
- load warm-up model and train using 2nd strategy
- compare performances
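To make the setup concrete, the sequential version can be sketched like this (all function names and the checkpoint path are placeholders for the real training code):

```python
# In-memory stand-in for checkpoint files on disk.
saved = {}

def init_model():
    """Placeholder: build a model with a fixed initialization."""
    return {"step": 0}

def train(model, strategy):
    """Placeholder: train `model` with the given strategy."""
    return {"step": model["step"] + 1, "strategy": strategy}

def save(model, path):
    """Placeholder for writing a checkpoint."""
    saved[path] = dict(model)

def load(path):
    """Placeholder for reading a checkpoint back."""
    return dict(saved[path])

# initialize, warm up, and store the warm-up model
save(train(init_model(), "warmup"), "warmup.ckpt")

# the two training runs, currently one after the other
model_1 = train(load("warmup.ckpt"), "1st strategy")
model_2 = train(load("warmup.ckpt"), "2nd strategy")

# compare performances (placeholder)
print(model_1["strategy"], model_2["strategy"])
```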
Is there a way to assign the training runs to different GPUs within the script, so that steps 3 and 4 happen in parallel, without calling a separate .py file for each step?