I hope you have enough GPUs and memory to scale your system well as n
increases!
Otherwise, you can use the torch.distributed.launch
module. Take a look at this snippet; it might give you a better idea of how to parallelize and even distribute compute using PyTorch.
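The snippet itself isn't included here, but a minimal sketch of single-machine data parallelism in PyTorch might look like the following. The toy model and batch sizes are placeholders; `nn.DataParallel` replicates the model across all visible GPUs and splits each batch along dimension 0, and with no GPUs it simply runs the wrapped module as-is:

```python
import torch
import torch.nn as nn

# Toy model; any nn.Module can be wrapped the same way.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

# DataParallel scatters each batch across available GPUs and gathers
# the outputs; on a CPU-only machine it just calls the module directly.
parallel_model = nn.DataParallel(model)
if torch.cuda.is_available():
    parallel_model = parallel_model.cuda()

batch = torch.randn(64, 10)
if torch.cuda.is_available():
    batch = batch.cuda()

out = parallel_model(batch)
print(out.shape)  # torch.Size([64, 1])
```

For multi-process or multi-node training you would instead wrap the model in `torch.nn.parallel.DistributedDataParallel` and start one process per GPU, e.g. with `python -m torch.distributed.launch --nproc_per_node=N train.py` (newer PyTorch releases recommend `torchrun` for the same purpose).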