class DistributedModel(nn.Module):
    def __init__(self):
        super().__init__(
            embedding=nn.Embedding(1000, 10),
            rnn=nn.Linear(10, 10).cuda(0),
        )

    def forward(self, x):
        # Compute embedding on CPU
        x = self.embedding(x)
        # Transfer to GPU
        x = x.cuda(0)
        # Compute RNN on GPU
        x = self.rnn(x)
        return x
This comes from the "Part of the model on CPU and part on the GPU" section of
http://pytorch.org/tutorials/beginner/former_torchies/parallelism_tutorial.html
This code doesn't run under either Python 2.7 or 3.5: `super().__init__()` is being called with keyword arguments, but `nn.Module.__init__` doesn't accept any (and under 2.7, the zero-argument `super()` form isn't valid in the first place). So what is this code trying to express? Is it simply wrong?
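For comparison, here is my guess at what the snippet intends, rewritten in the usual way: call `super().__init__()` with no arguments and assign the submodules as attributes afterwards. The `device` fallback to CPU is my own addition so the sketch runs without a GPU; substitute `.cuda(0)` if you want to match the tutorial exactly. Is this the intended meaning?

```python
import torch
import torch.nn as nn

# Fall back to CPU when no GPU is available (my assumption, not in the tutorial)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

class DistributedModel(nn.Module):
    def __init__(self):
        # nn.Module.__init__ takes no arguments; the explicit two-argument
        # super() call also works on Python 2.7
        super(DistributedModel, self).__init__()
        self.embedding = nn.Embedding(1000, 10)   # stays on the CPU
        self.rnn = nn.Linear(10, 10).to(device)   # lives on the GPU if present

    def forward(self, x):
        x = self.embedding(x)   # compute embedding on CPU
        x = x.to(device)        # transfer activations to the GPU
        return self.rnn(x)      # compute the linear layer on the GPU

model = DistributedModel()
out = model(torch.tensor([[1, 2, 3]]))
print(out.shape)  # torch.Size([1, 3, 10])
```

This version at least constructs and runs, which makes me suspect the tutorial snippet was pseudocode for "register these submodules", not something meant to execute as written.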