I am using a 12 GB GPU (Titan Xp) and I was surprised to notice that running two experiments of the same PyTorch program at once (each uses around 4 GB of memory, launched from different terminals) takes roughly twice as long as running each one separately.
I’m not sure if I’m missing something here, or whether I need to adjust some parameters to improve performance. Or is this simply expected in this situation?
NB: I am using 8 workers in the DataLoader.