Train multiple models in parallel on the same GPU

Hi,

I’m trying to implement this paper. At test time, for every test image x_i, I want to retrieve images relevant to x_i, finetune the model on those retrieved images, make a prediction on x_i, and then discard the finetuned weights.

Right now everything is sequential, so my effective test batch size is 1, which is quite slow. Is it possible to have multiple independent models on the same GPU that are updated in parallel?
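For reference, here is a minimal sketch of the sequential version I have now. The tiny linear model, the random data, and `retrieve_neighbors` are toy placeholders standing in for the real model, dataset, and retrieval step from the paper:

```python
import copy
import torch
import torch.nn as nn

# Toy stand-ins for the real model and data.
torch.manual_seed(0)
base_model = nn.Linear(4, 1)
train_x, train_y = torch.randn(100, 4), torch.randn(100, 1)
test_x = torch.randn(8, 4)

# Saved so we can check the base weights are never modified.
orig_w = base_model.weight.detach().clone()

def retrieve_neighbors(x, k=16):
    # Placeholder retrieval: k nearest training points by L2 distance.
    dists = (train_x - x).pow(2).sum(dim=1)
    idx = dists.topk(k, largest=False).indices
    return train_x[idx], train_y[idx]

predictions = []
for x in test_x:
    # Copy the base weights so each test point finetunes from scratch.
    model = copy.deepcopy(base_model)
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    nb_x, nb_y = retrieve_neighbors(x)
    for _ in range(5):  # a few finetuning steps on the retrieved samples
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(nb_x), nb_y)
        loss.backward()
        opt.step()
    with torch.no_grad():
        predictions.append(model(x.unsqueeze(0)))
    # The finetuned copy is discarded here; base_model is untouched.

preds = torch.cat(predictions)  # one prediction per test image
```

Since each finetuned copy is independent, I’m hoping the outer loop over `test_x` can somehow run as a batch on one GPU instead of one test image at a time.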

Thanks,
Lucas