In the standard PyTorch documentation, it says that `torch.nn` modules consider the input to be a mini-batch, so for a single sample it suggests using `input.unsqueeze(0)` to add a fake batch dimension. Is this also the case for PyTorch Geometric `nn` modules?
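
For example, with a plain `torch.nn` module this is what I mean (just a minimal sketch; the layer and shapes are arbitrary):

```
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3)
sample = torch.rand(3, 32, 32)      # a single image, no batch dimension
out = conv(sample.unsqueeze(0))     # fake mini-batch of size 1 -> shape (1, 8, 30, 30)
```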

More specifically, I want to feed a fully-connected graph with 35 vertices and scalar edge weights into an `NNConv` layer. I represent this graph as a `Data` object where `Data.x` is a 35 x 35 adjacency matrix, `Data.edge_index` is a 2 x 1225 tensor (since the graph is fully connected), and `Data.edge_attr` is a tensor of shape 1225 x 1 (again because the graph is fully connected and the edge attributes are just scalar weights). I designed the `NNConv` layer below, and I feed the network single samples rather than mini-batches.
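
For reference, the `Data` object is built roughly like this (a minimal sketch; the actual feature values come from my data):

```
import torch
from torch_geometric.data import Data

num_nodes = 35
x = torch.rand(num_nodes, num_nodes)                         # 35 x 35 adjacency matrix used as node features
row = torch.arange(num_nodes).repeat_interleave(num_nodes)
col = torch.arange(num_nodes).repeat(num_nodes)
edge_index = torch.stack([row, col], dim=0)                  # 2 x 1225, fully connected (self-loops included)
edge_attr = torch.rand(edge_index.size(1), 1)                # 1225 x 1 scalar edge weights
data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr)
```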

```
from torch.nn import Sequential, Linear, ReLU
from torch_geometric.nn import NNConv

nn = Sequential(Linear(1, 1225), ReLU())  # maps a scalar edge weight to a 35 * 35 = 1225 weight matrix
self.conv1 = NNConv(35, 35, nn, aggr='mean', root_weight=True, bias=True)
```

and in the forward function:

```
import torch.nn.functional as F

def forward(self, data):
    x, edge_index, edge_attr = data.x, data.edge_index, data.edge_attr
    x = F.sigmoid(self.conv11(self.conv1(x, edge_index, edge_attr)))
```
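
Right now I call the network on a single `Data` object without adding any fake batch dimension, roughly like this (`Net` is just a placeholder name for my model class):

```
model = Net()
out = model(data)  # one sample, no unsqueeze(0) anywhere
```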

What I do not understand is whether I need to add a fake mini-batch dimension. Is the above correct as it is, or do I need to call `x.unsqueeze(0)`? And if so, which of the `Data` attributes (`x`, `edge_index`, `edge_attr`) need an `unsqueeze(0)`? Thanks.