Linear layer with custom connectivity

Hi there,

I am new to PyTorch, but I want to create a very simple linear layer with custom connections.
As an example, I want to create the following connectivity:
[image: my_net — desired connectivity diagram]

My code does not work:

class Net(nn.Module):
  def __init__(self):
    super(Net, self).__init__()
    self.l1 = nn.Linear(n_feats, 3, bias=False)
    self.t = Variable(torch.randn(N_BATCH, 2))
    self.neurons = [nn.Linear(2, 1), nn.Linear(2, 1)]

  def forward(self, x):
    x = F.relu(self.l1(x))
    self.t[:, 0] = self.neurons[0](x[:, :2])
    self.t[:, 1] = self.neurons[1](x[:, 1:])
    return self.t

Besides being ugly, I expect this model to have (n_feats x 3 + 2 x 2 + 2) parameters, but it reports only (n_feats x 3).

What is the correct way to do this?

You can either force the weights of the non-existent connections to zero, or (I think this is what you are trying to do) use two separate fully connected layers. The second solution seems more elegant.
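The first option can be sketched with a fixed 0/1 mask applied to the weight of a full linear layer (a minimal sketch, assuming a recent PyTorch; `MaskedLinear` is just an illustrative name, not a built-in):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    """Linear layer whose weight is elementwise-multiplied by a fixed
    0/1 mask, so masked-out connections contribute nothing and their
    weight entries receive zero gradient."""
    def __init__(self, in_features, out_features, mask):
        super(MaskedLinear, self).__init__()
        self.linear = nn.Linear(in_features, out_features)
        # register as a buffer: moves with .to()/.cuda() but is not trained
        self.register_buffer('mask', mask)

    def forward(self, x):
        return F.linear(x, self.linear.weight * self.mask, self.linear.bias)

# Connectivity from the example: 3 hidden units -> 2 outputs,
# output 0 sees units 0 and 1, output 1 sees units 1 and 2.
mask = torch.tensor([[1., 1., 0.],
                     [0., 1., 1.]])
layer = MaskedLinear(3, 2, mask)
out = layer(torch.randn(4, 3))  # shape (4, 2)
```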

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(2,1)
        self.fc2 = nn.Linear(2,1)

    def forward(self, x):
        splitted = torch.split(x,1,1)
        x1 = torch.cat(splitted[:2],1)
        x2 = torch.cat(splitted[1:],1)
        x1 = F.relu(self.fc1(x1))
        x2 = F.relu(self.fc2(x2))
        return torch.cat([x1,x2],1) 
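As for the missing parameters in your original code: submodules stored in a plain Python list are invisible to `Module.parameters()`; wrapping them in `nn.ModuleList` registers them. A minimal sketch (using n_feats = 4 purely for illustration):

```python
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, n_feats):
        super(Net, self).__init__()
        self.l1 = nn.Linear(n_feats, 3, bias=False)
        # nn.ModuleList (unlike a plain list) registers the submodules,
        # so their weights show up in parameters() and get trained
        self.neurons = nn.ModuleList([nn.Linear(2, 1), nn.Linear(2, 1)])

net = Net(n_feats=4)
n_params = sum(p.numel() for p in net.parameters())
# 4*3 (l1) + 2*(2 weights + 1 bias) = 18
```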

There is no such thing as ugly or elegant, cpux; writing it either way is fine.

How would the backward pass work? In the same way, or is there a change to be made?
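No change is needed: autograd differentiates through `torch.split` and `torch.cat` automatically, so calling `.backward()` on a loss built from the forward above populates the gradients of both layers. A quick sketch to check this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

fc1, fc2 = nn.Linear(2, 1), nn.Linear(2, 1)
x = torch.randn(5, 3, requires_grad=True)

# same split/cat forward as in the answer above
parts = torch.split(x, 1, 1)
y = torch.cat([F.relu(fc1(torch.cat(parts[:2], 1))),
               F.relu(fc2(torch.cat(parts[1:], 1)))], 1)

# backward flows through split/cat with no custom code
y.sum().backward()
# fc1.weight.grad, fc2.weight.grad, and x.grad are now populated
```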