How to derive in_channels of a GNN layer from the size of its input?

I am trying to use TransformerConv to train a model that takes in graphs of different sizes, for a graph classification task. My code snippet is as follows:

self.conv1 = TransformerConv(in_channels=-1, out_channels=256)

def forward(self, x_onein, x_twoin, lambd):
    x_one, edge_index_one, batch_one = x_onein.x, x_onein.edge_index, x_onein.batch
    x_one = self.conv1(x_one, edge_index_one)
    return x_one

My input data come in batches of various sizes, and each graph has anywhere from a few tens of nodes to 1,200 nodes. Because of this, I need the first layer to accept graphs of different sizes.
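For context, this is roughly how my graphs are batched; the shapes and the graphs list below are made up for illustration, and the loader import assumes PyG 2.x:

import torch
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

# Three toy graphs of very different sizes, each with 8 node features
graphs = [
    Data(x=torch.randn(n, 8),
         edge_index=torch.randint(0, n, (2, 2 * n)),
         y=torch.tensor([0]))
    for n in (30, 500, 1200)
]
loader = DataLoader(graphs, batch_size=2)
for batch in loader:
    print(batch.x.shape, batch.batch.shape)  # node counts differ per batch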
When I run the code, I get the error:

RuntimeError: Trying to create tensor with negative dimension -1: [256, -1]

This is strange since I followed the documentation:
in_channels (int or tuple) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities.
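For reference, my understanding of the tuple form is that it is meant for bipartite (source/target) message passing, roughly like this sketch with made-up shapes:

import torch
from torch_geometric.nn import TransformerConv

x_src = torch.randn(4, 8)    # 4 source nodes with 8 features
x_dst = torch.randn(2, 16)   # 2 target nodes with 16 features
edge_index = torch.tensor([[0, 1, 2, 3],   # source node indices
                           [0, 0, 1, 1]])  # target node indices
conv = TransformerConv(in_channels=(8, 16), out_channels=256)
out = conv((x_src, x_dst), edge_index)     # shape: [2, 256]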

Any advice on this matter would be greatly appreciated.

I can’t reproduce the issue using:

import torch
from torch_geometric.nn import TransformerConv

x1 = torch.randn(4, 8)  # 4 nodes, 8 features each
edge_index = torch.tensor([[0, 1, 2, 3], [0, 0, 1, 1]])
conv = TransformerConv(in_channels=-1, out_channels=256)
out = conv(x1, edge_index)  # lazily infers in_channels=8

so maybe your torch_geometric version is too old to support lazy initialization of layers (I don’t know when it was introduced)?
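You can check the installed versions with:

import torch
import torch_geometric

print(torch.__version__)            # e.g. 1.7.1
print(torch_geometric.__version__)  # e.g. 1.6.3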

Thank you for the prompt reply.

My PyTorch version is 1.7.1 and my PyTorch Geometric version is 1.6.3. I shall try to update them to the latest versions and try again.

I’ve executed my code snippet with PyTorch 1.10.1 and PyTorch Geometric 2.0.3, so let’s see if updating helps. 🙂

Thank you for the help. Updating did help, and I am now able to run the program the way I intended. Lazy initialization seems to be a more recent feature.
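For anyone finding this later, here is a minimal sketch of the working pattern on PyG 2.x. The feature dimension of 8 is made up; note that in_channels refers to the node feature dimension, which must stay fixed across graphs, while the number of nodes per graph can vary freely:

import torch
from torch_geometric.nn import TransformerConv

conv = TransformerConv(in_channels=-1, out_channels=256)
for n in (50, 700, 1200):             # graphs with different node counts
    x = torch.randn(n, 8)             # feature dimension stays fixed at 8
    edge_index = torch.randint(0, n, (2, 3 * n))
    print(conv(x, edge_index).shape)  # torch.Size([n, 256])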