I have an RTX 6000 with 48 GB of VRAM. Right now I am training three models, each with a separate Python script, so there are three processes, each using about 15 GB.
My question is: would training all three models simultaneously be slower than training them one at a time? I have enough memory for all three, but maybe they are competing for GPU resources (compute, memory bandwidth) and causing a slowdown?
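One way to answer this empirically is to time the jobs directly: run one script alone, then all three together, and compare per-job wall time. Below is a minimal CPU-only sketch of that measurement idea using a toy busy-loop workload (the real test would launch your actual training scripts instead of `workload`; the function names here are just for illustration):

```python
import time
import multiprocessing as mp

def workload(n=2_000_000):
    # Toy stand-in for one training job: a CPU-bound busy loop.
    s = 0
    for i in range(n):
        s += i * i
    return s

def timed_run(num_procs):
    # Launch num_procs copies concurrently and measure total wall time.
    start = time.perf_counter()
    procs = [mp.Process(target=workload) for _ in range(num_procs)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    solo = timed_run(1)
    concurrent = timed_run(3)
    # If wall time roughly triples with 3 jobs, they are effectively
    # serialized on a shared resource; if it stays near the solo time,
    # they are running mostly in parallel.
    print(f"1 job: {solo:.2f}s, 3 jobs: {concurrent:.2f}s, "
          f"ratio: {concurrent / solo:.2f}x")
```

If the 3-job ratio comes out near 3x, running the jobs sequentially would be just as fast and simpler; a ratio near 1x means the concurrent setup is winning.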