Model parallelism training on two nodes

Hi,
Is it possible to train a model with model parallelism on two machines?
I understand that if I want to do model parallelism across machines together with DDP, I need to use RPC, right?
I want to use DDP, so I read this tutorial, but I think it requires four machines.

https://pytorch.org/tutorials/advanced/rpc_ddp_tutorial.html#

Thank you.

Hi,

The above tutorial (and the distributed APIs in general) can work on an arbitrary number of machines — the four-machine setup there (a master, a parameter server, and two trainers) is just an example, and you can change the world size to match however many nodes you have.
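For instance, here is a minimal sketch of bringing up RPC with a world size of 2. The worker names, the `remote_square` function, and the port are my own illustrative choices, not from the tutorial. For simplicity this simulates the two nodes with two local processes via `mp.spawn`; on two real machines you would instead launch one process per node and point `MASTER_ADDR` at the first node's IP:

```python
import os
import torch
import torch.distributed.rpc as rpc
import torch.multiprocessing as mp

def remote_square(x):
    # A toy function executed on the remote worker.
    return x * x

def run(rank, world_size):
    # On two real machines, set MASTER_ADDR to the IP of the rank-0 node.
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"
    rpc.init_rpc(f"worker{rank}", rank=rank, world_size=world_size)
    if rank == 0:
        # Rank 0 calls a function that runs on worker1 and waits for the result.
        ret = rpc.rpc_sync("worker1", remote_square, args=(torch.tensor(3.0),))
        print(ret.item())
    # shutdown() blocks until all workers are done, so both ranks call it.
    rpc.shutdown()

if __name__ == "__main__":
    world_size = 2  # two nodes instead of the tutorial's four
    mp.spawn(run, args=(world_size,), nprocs=world_size, join=True)
```

The same pattern extends to the tutorial's hybrid setup: you would keep RPC for the model-parallel part and wrap the locally held submodule in DDP, adjusting `world_size` and the worker names to your two machines.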