Training multiple models simultaneously

I have an RTX 6000 with about 48 GB of memory. Right now I am training three models, each with a separate Python script, so there are three processes, each taking about 15 GB.

My question is whether training all three models simultaneously is slower than training one at a time. I have enough memory for all three, but maybe they are competing for resources and causing a slowdown?

Yes, each process will compete for memory (which seems to be fine in your case) and for compute resources. As long as no single process fully saturates the GPU's compute resources, running all three concurrently should still be faster overall than running them one after another, but it depends on the overall setup and on how high the GPU utilization of each process is.
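One way to check this empirically is to watch GPU utilization and memory while the training scripts are running. Below is a minimal sketch using `nvidia-smi`'s CSV query mode; the helper function names are my own, and the sample string only illustrates the CSV shape this query produces.

```python
import subprocess

def parse_gpu_stats(csv_text):
    """Parse output of
    `nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv,noheader,nounits`.

    Returns a list of (utilization_percent, memory_used_mib) tuples, one per GPU.
    """
    stats = []
    for line in csv_text.strip().splitlines():
        util, mem = (field.strip() for field in line.split(","))
        stats.append((int(util), int(mem)))
    return stats

def query_gpu_stats():
    # Requires the NVIDIA driver; run this while your training processes are active.
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_stats(out)

# Illustrative sample of the CSV shape (one GPU, 95% util, ~45 GB used):
sample = "95, 45213\n"
print(parse_gpu_stats(sample))  # [(95, 45213)]
```

If a single training process already shows utilization near 100%, the three jobs are serializing on the GPU's compute units and running them concurrently buys little; if each sits well below that, concurrent training is likely the better use of the card.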