Using MKL-DNN with Distributed Data Parallel (DDP)

Hi Everyone.

I have an autoencoder that I am training with DDP. I wanted to try to improve performance with MKL-DNN, so I converted the model using the lines below, but at runtime I get an AssertionError. Is MKL-DNN not supported with DDP, or am I doing something wrong? Any help would be highly appreciated.

    autoencoder = AutoEncoder(layers=layers)
    autoencoderMKL = mkldnn_utils.to_mkldnn(autoencoder)
    ddp_model = DDP(autoencoderMKL)

Error:

    File "/N/u2/p/pulasthiiu/git/deepLearning_MDS/nnprojects/Mnist/", line 156, in
    File "/N/u2/p/pulasthiiu/git/deepLearning_MDS/nnprojects/Mnist/", line 105, in main
      ddp_model = DDP(autoencoderMKL)
    File "/N/u2/p/pulasthiiu/python3.8/lib/python3.8/site-packages/torch/nn/parallel/", line 344, in __init__
      assert any((p.requires_grad for p in module.parameters())), (
    AssertionError: DistributedDataParallel is not needed when a module doesn't have any parameter that requires a gradient.
    Traceback (most recent call last):
      File "/N/u2/p/pulasthiiu/python3.8/lib/python3.8/", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/N/u2/p/pulasthiiu/python3.8/lib/python3.8/", line 87, in _run_code
        exec(code, run_globals)
      File "/N/u2/p/pulasthiiu/python3.8/lib/python3.8/site-packages/torch/distributed/", line 260, in
      File "/N/u2/p/pulasthiiu/python3.8/lib/python3.8/site-packages/torch/distributed/", line 255, in main
        raise subprocess.CalledProcessError(returncode=process.returncode,
    subprocess.CalledProcessError: Command '['/N/u2/p/pulasthiiu/python3.8/bin/python3', '-u', '/N/u2/p/pulasthiiu/git/deepLearning_MDS/nnprojects/Mnist/', '-w', '40', '-ep', '10', '-bs', '8000', '-rc', '1024', '-ds', '640000', '-l', '768x576x432x324']' returned non-zero exit status 1.

Best Regards,

I don’t believe that to_mkldnn() modifies the underlying model, just the memory format of the tensors; please let me know if I am wrong. We need more info on the AutoEncoder model and what it looks like. Could you also include the code for the model? Does it have any parameters?

As a reference, here is the line that is erroring out: pytorch/ at master · pytorch/pytorch · GitHub
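For what it's worth, the condition DDP asserts on can be checked directly. The sketch below is an assumption about what is happening, not a confirmed diagnosis: `to_mkldnn()` targets inference and appears to re-register `Linear` weights as buffers rather than parameters, which would make `module.parameters()` empty and trip DDP's assertion.

```python
import torch
import torch.nn as nn
from torch.utils import mkldnn as mkldnn_utils

plain = nn.Linear(8, 4)
# the exact condition DDP asserts on: True here, so DDP accepts the plain model
print(any(p.requires_grad for p in plain.parameters()))

if torch.backends.mkldnn.is_available():
    converted = mkldnn_utils.to_mkldnn(plain.eval())
    # if the converted module keeps its weights as mkldnn buffers,
    # parameters() comes back empty and the same check returns False,
    # which is exactly the AssertionError in the traceback above
    print(any(p.requires_grad for p in converted.parameters()))
```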

Hi Huang,

Sorry about the late reply. It is a simple autoencoder; it just has some logic to add layers when I specify the number of layers in the autoencoder (the code is below). Am I using the to_mkldnn function incorrectly?

Link to complete code:
Without MKL

With MKL:

```python
class AutoEncoder(nn.Module):
    def __init__(self, **kwargs):
        super().__init__()  # required before registering submodules
        inner_layers = kwargs["layers"]
        encoder_layers = []
        decoder_layers = []
        num_layers = len(inner_layers) - 1
        print(f"numlayers {num_layers}")
        for x in range(num_layers):
            encoder_layers.append(
                nn.Linear(in_features=inner_layers[x],
                          out_features=inner_layers[x + 1]))
            decoder_layers.append(
                nn.Linear(in_features=inner_layers[num_layers - x],
                          out_features=inner_layers[num_layers - x - 1]))
            if x != num_layers - 1:
                # activation between layers; the body of the original
                # `if x == num_layers - 1` branch was lost in the paste,
                # so skipping the activation on the last layer is assumed
                encoder_layers.append(nn.ReLU())
                decoder_layers.append(nn.ReLU())

        self.encoder = nn.Sequential(*encoder_layers)
        self.decoder = nn.Sequential(*decoder_layers)

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x
```

Thanks for the model. I just verified that it is failing, and mkldnn does change the model layers. I don’t have a lot of context on MKL-DNN, but I created an issue on GitHub to track this and loop in the right people: Support for mkldnn + ddp · Issue #56024 · pytorch/pytorch · GitHub.


@Pulasthi does it work if you convert the model to mkldnn and run local training without DDP?

@Yanli_Zhao let me try that out and get back to you
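A minimal local-only sketch (no DDP) along the lines of what @Yanli_Zhao suggests. The `nn.Sequential` stand-in and the 768/576 layer sizes are assumptions taken from the `-l 768x576x432x324` flag in the command line above, not the actual training script:

```python
import torch
import torch.nn as nn
from torch.utils import mkldnn as mkldnn_utils

# stand-in for the autoencoder's first encoder/decoder pair
model = nn.Sequential(nn.Linear(768, 576), nn.ReLU(), nn.Linear(576, 768))

if torch.backends.mkldnn.is_available():
    model.eval()  # to_mkldnn is meant for inference
    mkl_model = mkldnn_utils.to_mkldnn(model)
    x = torch.randn(8, 768).to_mkldnn()   # inputs must be mkldnn tensors too
    out = mkl_model(x).to_dense()         # convert back for inspection
    print(out.shape)
```

If this forward pass works locally, the failure is isolated to the DDP wrapping rather than the conversion itself.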