Issues with torch.compile in DistributedModelParallel (DMP)

Hi!

I recently started applying torch.compile to a submodule before wrapping the full model with DMP. The submodule I'm compiling is not sharded. However, inspecting the PyTorch profiler trace suggests the submodule isn't being compiled at all. Does torch.compile support DMP? A minimal sketch of my setup is below.
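For context, here's roughly what I'm doing. This is a self-contained sketch rather than my actual model: it assumes DMP is torchrec's DistributedModelParallel, uses made-up DenseTower/MyModel modules in place of my real ones, and runs single-process on gloo just so the snippet is executable.

```python
import os

import torch
import torch.distributed as dist
from torchrec.distributed.model_parallel import DistributedModelParallel

# Hypothetical dense (non-sharded) tower; this is the part I compile.
class DenseTower(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(128, 128),
            torch.nn.ReLU(),
            torch.nn.Linear(128, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mlp(x)

class MyModel(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.dense = DenseTower()  # not sharded
        # ... sharded embedding modules omitted in this sketch ...

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dense(x)

# Single-process process group just to make DMP constructible here.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = MyModel()

# Compile only the non-sharded submodule *before* wrapping with DMP.
model.dense = torch.compile(model.dense)

dmp = DistributedModelParallel(module=model)

# Run a forward pass under the profiler. If compilation took effect, I'd
# expect compiled-region events (e.g. "Torch-Compiled Region") in the
# trace, but I only see the eager ops.
with torch.profiler.profile() as prof:
    dmp(torch.randn(4, 128))
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```

In my real setup the wrapping happens across multiple ranks; the question is whether DMP's wrapping (e.g. its data-parallel handling of non-sharded modules) can silently undo or bypass the compiled submodule.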

Could you share where DMP comes from and how it’s defined for your model?