What does this code in the PyTorch docs mean?

class DistributedModel(nn.Module):

    def __init__(self):
        super().__init__(
            embedding=nn.Embedding(1000, 10),
            rnn=nn.Linear(10, 10).cuda(0),
        )

    def forward(self, x):
        # Compute embedding on CPU
        x = self.embedding(x)

        # Transfer to GPU
        x = x.cuda(0)

        # Compute RNN on GPU
        x = self.rnn(x)
        return x

This comes from the section "Part of the model on CPU and part on the GPU" in
http://pytorch.org/tutorials/beginner/former_torchies/parallelism_tutorial.html

This code doesn't run for me on either Python 2.7 or 3.5: on 2.7, super() must be given arguments, and nn.Module.__init__ doesn't accept keyword arguments like these in any case. So what is this code trying to say? Is it wrong?
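
For reference, a minimal reproduction of the failure on Python 3 (the Broken class here is just for illustration, and the exact error text may vary by version):

import torch.nn as nn

class Broken(nn.Module):
    def __init__(self):
        # Passing submodules as keyword arguments to nn.Module.__init__,
        # as the tutorial snippet appears to do
        super().__init__(embedding=nn.Embedding(1000, 10))

Broken()
# raises something like:
# TypeError: __init__() got an unexpected keyword argument 'embedding'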

I haven’t tried this tutorial, but I think the call to super is just wrong there.
Try:

import torch

class DistributedModel(torch.nn.Module):
    def __init__(self):
        # On Python 2.7, use the explicit form instead:
        # super(DistributedModel, self).__init__()
        super().__init__()
        self.embedding = torch.nn.Embedding(1000, 10)
        self.rnn = torch.nn.Linear(10, 10).cuda(0)

    def forward(self, x):
        # Compute embedding on CPU
        x = self.embedding(x)

        # Transfer to GPU
        x = x.cuda(0)

        # Compute RNN on GPU
        x = self.rnn(x)
        return x
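
For completeness, a quick usage sketch, assuming a CUDA device is available and a reasonably recent torch (the shapes are made up for illustration): the embedding lookup runs on the CPU, the linear layer runs on GPU 0, so the output and anything you compare it against end up on cuda:0.

model = DistributedModel()

# Batch of 8 sequences of length 5, as CPU LongTensor indices into the embedding
indices = torch.randint(0, 1000, (8, 5), dtype=torch.long)

out = model(indices)   # embedding on CPU, linear layer on GPU 0
print(out.device)      # cuda:0

# Targets must live on the same device as the output
target = torch.randn(8, 5, 10).cuda(0)
loss = torch.nn.functional.mse_loss(out, target)
loss.backward()        # gradients flow back across the device boundary to the CPU embedding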