Implement a multi-input, multi-head neural network with specific forward/backpropagation paths

Hello,

I have a special network architecture requirement that is a little bit like dropout, but instead of dropping individual units it drops whole network modules along a specific forward path and backward path.

The network architecture is as follows:

There are multiple input layers and multiple output layers in a single neural network. When we use this network, we only forward the data along two of the paths, and the gradients should flow backward along exactly the same paths used in the forward pass.

For an example, see the diagram in the slides linked below.
Can we implement this kind of network using backward hooks? Any suggestions are welcome.
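To make the hook question more concrete, here is a rough sketch of the kind of thing I have in mind, using parameter-level hooks rather than module-level backward hooks (the names branch and zero_grad_hook are just placeholders I made up):

import torch
import torch.nn as nn

branch = nn.Conv2d(3, 8, 3)             # placeholder for one module we want to "drop"

def zero_grad_hook(grad):
    # Returning a new tensor replaces the gradient that would be accumulated in .grad
    return torch.zeros_like(grad)

handles = [p.register_hook(zero_grad_hook) for p in branch.parameters()]
# ... run forward/backward as usual; branch's parameters now receive zero gradients ...
for h in handles:
    h.remove()                          # restore normal gradients for later steps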
If you want to know more details, please take a look at these slides: Pytorch Implementation Problem - Google Slides


I think it should look something like this:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Two input branches, a shared hidden block, and two output heads
        self.conv_in_1 = nn.Conv2d(...)
        self.conv_in_2 = nn.Conv2d(...)
        self.conv_hidden = nn.Conv2d(...)
        self.fc_out_1 = nn.Linear(...)
        self.fc_out_2 = nn.Linear(...)

    def forward(self, x1, x2):
        x1 = F.relu(self.conv_in_1(x1))
        x2 = F.relu(self.conv_in_2(x2))
        x = torch.cat([x1, x2], dim=1)          # concatenate along the channel dimension
        x = F.relu(self.conv_hidden(x))
        x = x.view(x.size(0), -1)               # flatten, keeping the batch dimension
        out1 = F.relu(self.fc_out_1(x))
        out2 = F.relu(self.fc_out_2(x))
        return out1, out2

net = Net()
optim = torch.optim.SGD(net.parameters(), lr=lr)
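You also need a loss function for the snippets below; which one depends on your targets. As a placeholder, assuming classification targets:

criterion = nn.CrossEntropyLoss()       # placeholder; swap in whatever loss fits your task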

Then, you train it like this:

# CASE 1: train only the path input 1 -> hidden -> output 1
x1, x2, target1, _ = next_training_batch()
optim.zero_grad()                       # clear gradients from the previous step
out1, _ = net(x1, x2.detach())          # detach x2 so no gradient flows back into its producer
loss1 = criterion(out1, target1)
loss1.backward()                        # out2 is unused, so fc_out_2 gets no gradient
optim.step()

The important points are that x2 is detached (so no gradient flows back into whatever produced x2) and that the loss is computed only from out1 (so fc_out_2 receives no gradient). One caveat: detaching the raw input does not stop conv_in_2’s own weights from receiving gradients through the concatenation; if you also want to skip conv_in_2 in the backward pass, detach the output of conv_in_2 inside forward for that step, or temporarily set requires_grad_(False) on its parameters, as sketched below for the mirror case.
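Here is a minimal sketch of the mirror case (CASE 2), using the temporary requires_grad_ toggle mentioned above to freeze the unused branch (conv_in_1 this time); applying the same pattern to conv_in_2 would complete CASE 1:

# CASE 2: train only the path input 2 -> hidden -> output 2
x1, x2, _, target2 = next_training_batch()

# Temporarily freeze the branch-1 encoder so it receives no gradient this step
for p in net.conv_in_1.parameters():
    p.requires_grad_(False)

optim.zero_grad()
_, out2 = net(x1, x2)
loss2 = criterion(out2, target2)
loss2.backward()                        # out1 is unused, so fc_out_1 gets no gradient
optim.step()

# Re-enable the branch-1 encoder for later steps
for p in net.conv_in_1.parameters():
    p.requires_grad_(True)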


Thank you! I’ll try it later.