Custom GNN layer using NNConv

I am trying to write a GNN class that uses both the node features and the edge features of my graphs. I have implemented NNConv in order to make use of the edge features, but I cannot figure out what is going wrong. The dataset contains many graphs with varying numbers of nodes and edges, but the node feature dimension (5) and the edge feature dimension (3) are the same for all of them; only the size of edge_index varies with the number of edges. The two error cases below occur in my code. Can someone suggest the necessary changes?
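
For reference, a single graph from my dataset has tensors shaped like this minimal sketch (the values are random placeholders I made up; only the shapes are taken from the printouts below):

import torch
from torch_geometric.data import Data

x = torch.randn(39, 5)                      # node features, dim 5
edge_attr = torch.randn(44, 3)              # edge features, dim 3
edge_index = torch.randint(0, 39, (2, 88))  # connectivity
graph = Data(x=x, edge_index=edge_index, edge_attr=edge_attr)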

Case (a)

x torch.Size([39, 5])
edge_attr torch.Size([44, 3])
edge_index torch.Size([2, 88])
batch_index torch.Size([39])

The batch is: DataBatch(x=[39, 5], edge_index=[2, 88], edge_attr=[44, 3], y=[1], smiles=[1], batch=[39], ptr=[2])

Code:

model = GNN(test[0].x.shape[1], test[0].edge_attr.shape[1], test[0].edge_index)
model = model.to(device)
pred = model(batch.x.float(), batch.edge_attr.float(), batch.edge_index, batch.batch)

Error message:

---> 44 x = self.conv1(x, edge_index, edge_attr)
---> 1501 return forward_call(*args, **kwargs)
---> 102 out = self.propagate(edge_index, x=x, edge_attr=edge_attr, size=size)
---> 469 out = self.message(**msg_kwargs)
---> 115 weight = weight.view(-1, self.in_channels_l, self.out_channels)

Error:

RuntimeError: shape '[-1, 5, 32]' is invalid for input of size 1408
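
If I understand the NNConv docs correctly, the network passed to NNConv must map edge features of shape [num_edges, edge_feat] to [num_edges, in_channels * out_channels], which the layer then reshapes via view(-1, in_channels, out_channels). My nn1 emits only 32 values per edge, so for 44 edge-attribute rows the view receives 44 * 32 = 1408 values, which is not a multiple of 5 * 32 = 160. Here is a minimal standalone sketch of the contract I believe NNConv expects (all tensors and names are my own placeholders):

import torch
import torch.nn as nn
from torch_geometric.nn import NNConv

in_channels, out_channels, edge_feat = 5, 32, 3

# The edge network must emit in_channels * out_channels values per edge,
# because NNConv reshapes its output to [-1, in_channels, out_channels].
edge_nn = nn.Sequential(nn.Linear(edge_feat, in_channels * out_channels), nn.ReLU())
conv = NNConv(in_channels, out_channels, edge_nn, aggr='mean')

x = torch.randn(39, in_channels)            # placeholder node features
edge_index = torch.randint(0, 39, (2, 88))  # placeholder connectivity
edge_attr = torch.randn(88, edge_feat)      # one attribute row per edge
out = conv(x, edge_index, edge_attr)        # -> torch.Size([39, 32])

Is this the change I need, or is there more to it?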


Case (b)

x torch.Size([24, 5])
edge_attr torch.Size([25, 3])
edge_index torch.Size([2, 50])
batch_index torch.Size([24])

For this batch I used a single graph only; thus my batch is: DataBatch(x=[24, 5], edge_index=[2, 50], edge_attr=[25, 3], y=[1], smiles=[1], batch=[24], ptr=[2])

The error occurs when I run the following code:

model = GNN(test[0].x.shape[1], test[0].edge_attr.shape[1], test[0].edge_index)
model = model.to(device)
pred = model(batch.x.float(), batch.edge_attr.float(), batch.edge_index, batch.batch)

Error:

RuntimeError: The size of tensor a (50) must match the size of tensor b (5) at non-singleton dimension 0
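
My guess (which I have not been able to verify) is that this is related to edge_attr having 25 rows while edge_index lists 50 edges: 25 * 32 = 800 values reshape cleanly to [5, 5, 32], but NNConv then computes one message per column of edge_index, i.e. 50 of them, hence the 50 vs 5 mismatch. If NNConv indeed expects one attribute row per edge, a sanity check like the following sketch should fail on my data (the assertion is mine, not part of my model):

# Assumed contract: one edge_attr row per column of edge_index.
assert edge_attr.size(0) == edge_index.size(1), \
    f"{edge_attr.size(0)} attribute rows vs {edge_index.size(1)} edges"

My GNN class: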
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import Linear
from torch_geometric.nn import NNConv, TopKPooling
from torch_geometric.nn import global_max_pool as gmp, global_mean_pool as gap


class GNN(torch.nn.Module):
    def __init__(self, feature_size, edge_feat, edge_index):
        super(GNN, self).__init__()
        num_classes = 2
        embedding_size = 32
        n_out = 16

        # Edge networks that turn edge features into NNConv weights
        nn1 = nn.Sequential(nn.Linear(edge_feat, 32), nn.ReLU())
        nn2 = nn.Sequential(nn.Linear(edge_feat, 32), nn.ReLU())
        
        # GNN layers

        self.conv1 = NNConv(feature_size, 32, nn1, aggr='mean')
        self.pool1 = TopKPooling(32, ratio=0.8)

        self.conv2 = NNConv(32, 64, nn2, aggr='mean')
        self.pool2 = TopKPooling(32, ratio=0.8)

        # Linear layers
        self.linear1 = Linear(32, 16)
        self.linear2 = Linear(16, num_classes)  

    def forward(self, x, edge_attr, edge_index, batch_index):
        
        # First block: NNConv over edge features, then Top-K pooling
        x = self.conv1(x, edge_index, edge_attr)
        x, edge_index, edge_attr, batch_index, _, _ = self.pool1(
            x, edge_index, None, batch_index)  # edge_attr is passed as None
        x1 = torch.cat([gmp(x, batch_index), gap(x, batch_index)], dim=1)

        # Second block
        x = self.conv2(x, edge_index)  # called without edge_attr here
        x, edge_index, edge_attr, batch_index, _, _ = self.pool2(
            x, edge_index, None, batch_index)
        x2 = torch.cat([gmp(x, batch_index), gap(x, batch_index)], dim=1)

        # Combine pooled vectors (element-wise sum)
        x = x1 + x2

        # Output block
        x = self.linear1(x).relu()
        x = F.dropout(x, p=0.5, training=self.training)
        x = self.linear2(x)

        return x