Changing the size of nn.Linear depending on example size

I am using an nn.Linear layer as part of a graph classifier (in PyTorch Geometric). The network looks like this:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GraphConv

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = GraphConv(in_channels=768, out_channels=16)
        self.conv2 = GraphConv(in_channels=16, out_channels=2)
        # PROBLEM
        self.fc1 = nn.Linear(16 * 2, 2)

    def forward(self, data):
        x, edge_index, edge_weight = data.x, data.edge_index, data.edge_attr

        x = self.conv1(x, edge_index, edge_weight)
        x = F.relu(x)
        x = F.dropout(x, training=self.training)
        x = self.conv2(x, edge_index, edge_weight)
        x = F.relu(x)

        # PROBLEM
        x = x.view(-1, 16 * 2)
        x = self.fc1(x)

        return F.softmax(x, dim=1)

Now, I am feeding in a list of graphs of different sizes, such as:


[Data(edge_attr=[218, 1], edge_index=[2, 218], x=[203, 768], y=[1]),
 Data(edge_attr=[1306, 1], edge_index=[2, 1306], x=[1281, 768], y=[1]),
 Data(edge_attr=[244, 1], edge_index=[2, 244], x=[234, 768], y=[1])]

So I need some way to read the 0th dimension of x's shape (the number of nodes) before defining the size of my linear layer and doing the view reshaping. How can this be done?

You could use the functional API via F.linear and define the weight and bias depending on your input shape.
However, this would create a new set of parameters for each input shape, and I'm not sure if that's what you want.
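As a rough illustration only (the dynamic_linear helper name, the random initialization, and the flattening of one graph into a single row are all assumptions made for this sketch, not a trainable setup):

import torch
import torch.nn.functional as F

def dynamic_linear(x, out_features=2):
    # Hypothetical helper: flatten all node features of one graph into a
    # single row, then build a weight matching that length on the fly.
    x = x.view(1, -1)
    in_features = x.size(1)
    # NOTE: these tensors are recreated on every call and are not registered
    # nn.Parameters, so an optimizer would never update them as written.
    weight = torch.randn(out_features, in_features, device=x.device)
    bias = torch.zeros(out_features, device=x.device)
    return F.linear(x, weight, bias)

Since the weight is rebuilt on every call, nothing here is actually learned; you would have to store and reuse one parameter set per input shape yourself.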

Usually you would add, e.g., an adaptive pooling layer to make sure the incoming activation always has a defined shape.
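For graph-structured inputs, the PyTorch Geometric counterpart of that idea is a global pooling over the node dimension. Here is a minimal sketch of your model using global_mean_pool; the fallback batch vector for a single, unbatched Data object is an assumption about how the graphs are fed in:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GraphConv, global_mean_pool

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = GraphConv(in_channels=768, out_channels=16)
        self.conv2 = GraphConv(in_channels=16, out_channels=2)
        # After pooling, every graph yields one 2-dim vector (conv2's
        # out_channels), so the linear layer's input size is fixed.
        self.fc1 = nn.Linear(2, 2)

    def forward(self, data):
        x, edge_index, edge_weight = data.x, data.edge_index, data.edge_attr

        x = F.relu(self.conv1(x, edge_index, edge_weight))
        x = F.dropout(x, training=self.training)
        x = F.relu(self.conv2(x, edge_index, edge_weight))

        # Pool node features into one vector per graph; for a single graph
        # without a batch vector, treat all nodes as belonging to graph 0.
        batch = getattr(data, 'batch', None)
        if batch is None:
            batch = torch.zeros(x.size(0), dtype=torch.long, device=x.device)
        x = global_mean_pool(x, batch)  # shape: [num_graphs, 2]

        x = self.fc1(x)
        return F.softmax(x, dim=1)

global_mean_pool reduces the [num_nodes, 2] activation to a fixed-size [num_graphs, 2] tensor, so fc1's input size no longer depends on the number of nodes and the x.view reshape can be dropped entirely.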