Train multiple models on a single GPU in parallel?

I am doing leave-one-out cross-validation and I only have one GPU at my disposal (I use Colab). Currently the different model instances are trained sequentially, roughly as in the sketch below. This takes a lot of time, and the "RAM" bar in Colab stays nearly empty, which leads me to believe one could do better by somehow loading multiple models onto the GPU and training them simultaneously. Is this possible? Paid services / frameworks are not an option for me.
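For illustration, here is a simplified sketch of the kind of sequential loop I mean (PyTorch assumed; the model, data shapes, and hyperparameters are placeholders, not my actual code):

```python
import torch
import torch.nn as nn

device = torch.device("cuda")

def make_model():
    # placeholder model -- my real model is different
    return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)

# placeholder dataset: one sample is held out per fold
X = torch.randn(50, 16)
y = torch.randn(50, 1)

for held_out in range(len(X)):  # leave-one-out: one model per held-out sample
    train_idx = [i for i in range(len(X)) if i != held_out]
    x_train, y_train = X[train_idx].to(device), y[train_idx].to(device)

    model = make_model()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # each fold is trained to completion before the next one even starts
    for epoch in range(100):
        opt.zero_grad()
        loss = loss_fn(model(x_train), y_train)
        loss.backward()
        opt.step()
```

Each of these small models uses only a fraction of the GPU, so running the folds one after another feels wasteful.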
