Understanding PyTorch Geometric's "add" aggregation function during message passing

I'm just getting started with PyTorch Geometric.

Let’s say I have an undirected graph with four nodes, each with a single feature, and I wish to implement the graph convolutional layer shown in the documentation here:

https://pytorch-geometric.readthedocs.io/en/latest/notes/create_gnn.html

Following those instructions, my message function would return the tensor
norm.view(-1, 1) * x_j

Eventually I would like to recover a 4x1 tensor, where each entry is the normalised sum of the node's own feature and the features of its neighbouring nodes, as per the equation shown in the tutorial.
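
For concreteness, here is roughly the layer I have in mind, paraphrased from the tutorial (the class and variable names mirror the documentation, but this is my own condensed sketch, not the exact code from the page):

```python
import torch
from torch_geometric.nn import MessagePassing
from torch_geometric.utils import add_self_loops, degree

class GCNConv(MessagePassing):
    def __init__(self, in_channels, out_channels):
        super().__init__(aggr='add')  # step 5: sum incoming messages per node
        self.lin = torch.nn.Linear(in_channels, out_channels)

    def forward(self, x, edge_index):
        # x: [num_nodes, num_features], edge_index: [2, num_edges]
        edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))
        x = self.lin(x)
        row, col = edge_index
        deg = degree(col, x.size(0), dtype=x.dtype)
        deg_inv_sqrt = deg.pow(-0.5)
        norm = deg_inv_sqrt[row] * deg_inv_sqrt[col]
        return self.propagate(edge_index, x=x, norm=norm)

    def message(self, x_j, norm):
        # one row per edge (self loops included); x_j holds the source-node features
        return norm.view(-1, 1) * x_j
```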

Step 5 of the tutorial is performed by the "add" aggregation. How exactly does it go about this?
norm.view(-1, 1) * x_j has dimension (# of edges including self loops, # of features).

How does the "add" aggregation infer which edge belongs to which node in this case, so that it can perform the correct sum?

What I believe happens is that the add aggregation also gets passed the original edge_index, which it can use to map each row of the message function's output back to its target node, perhaps via a scatter-add operation (e.g. from torch_scatter), to produce the correct per-node sums.
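
To make my guess concrete, here is a tiny sketch of what I think the "add" aggregation boils down to, assuming the default flow='source_to_target' so that messages are summed into the rows given by edge_index[1] (the target nodes). The example graph, the stand-in message values, and the use of index_add_ are illustrative assumptions on my part, not PyG's actual internals:

```python
import torch

num_nodes = 4
# undirected path 0-1-2-3 stored as directed edges both ways, plus self loops
edge_index = torch.tensor([
    [0, 1, 1, 2, 2, 3, 0, 1, 2, 3],   # source nodes (x_j is gathered from these)
    [1, 0, 2, 1, 3, 2, 0, 1, 2, 3],   # target nodes (each message is summed here)
])

# stand-in for the message output norm.view(-1, 1) * x_j:
# shape [# edges including self loops, # features] = [10, 1]
messages = torch.arange(10, dtype=torch.float).view(-1, 1)

# "add" aggregation: scatter-sum every message into its target node's row
out = torch.zeros(num_nodes, 1)
out.index_add_(0, edge_index[1], messages)
print(out.shape)  # torch.Size([4, 1]); row i is the sum of all messages targeting node i
```

Is this roughly what happens under the hood?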