How to train a parallel network?

def forward(self, x):
    net1_out = self.feature1(x)  # input 32x32
    net1_out = net1_out.view(net1_out.size(0), -1)
    net1_out = self.classifer1(net1_out)

    x2 = x.resize_(self.batch, 3, 48, 48)
    net2_out = self.feature2(x2)
    net2_out = net2_out.view(net2_out.size(0), -1)  # [128, 2250]
    net2_out = self.classifer2(net2_out)

    x3 = x.resize_(self.batch, 3, 54, 54)
    net3_out = self.feature3(x3)
    net3_out = net3_out.view(net3_out.size(0), -1)
    net3_out = self.classifer3(net3_out)

    net_result = (net1_out + net2_out + net3_out) / 3
    return net_result

I want to train a parallel network whose sub-networks take inputs of different sizes, as in the forward code above. Training succeeds when I use only one sub-network, but with two or three sub-networks it raises this error:

loss.backward()
torch.autograd.backward(self, gradient, retain_graph, create_graph)
allow_unreachable=True) # allow_unreachable flag
RuntimeError: CUDNN_STATUS_BAD_PARAM

Thanks for your help.

I think the error is thrown in

x2 = x.resize_(...)

If you run it on CPU, you will get:

RuntimeError: cannot resize variables that require grad
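
You can reproduce it on CPU with a minimal snippet (the shapes here are just illustrative):

import torch

x = torch.randn(2, 3, 32, 32, requires_grad=True)
x.resize_(2, 3, 48, 48)  # RuntimeError: cannot resize variables that require grad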

Also note that resize_ does not interpolate anyway: it just changes the shape of the underlying storage, so the new elements would be uninitialized. I think you could try to upsample your tensor with grid_sample instead, which is differentiable.
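
Here is a minimal sketch of a differentiable resize via affine_grid + grid_sample; the upsample helper and the shapes are just illustrative, matching the 32 -> 48/54 cases above:

import torch
import torch.nn.functional as F

def upsample(x, h, w):
    # Identity affine transform per batch element; grid_sample then
    # bilinearly resamples x onto an (h, w) grid, so gradients flow
    # back to x instead of erroring like resize_ does.
    theta = torch.eye(2, 3, device=x.device, dtype=x.dtype)
    theta = theta.unsqueeze(0).repeat(x.size(0), 1, 1)
    grid = F.affine_grid(theta, [x.size(0), x.size(1), h, w], align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

x = torch.randn(128, 3, 32, 32, requires_grad=True)
x2 = upsample(x, 48, 48)  # [128, 3, 48, 48], differentiable
x3 = upsample(x, 54, 54)  # [128, 3, 54, 54]

If you only need plain bilinear upsampling, F.interpolate(x, size=(48, 48), mode='bilinear', align_corners=False) also keeps the autograd graph intact and is simpler than building a sampling grid.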