How to avoid sampling invalid negative edges in PyTorch Geometric

I’m fairly new to PyTorch Geometric, and I’m trying to build my first graph variational autoencoder. I am mini-batching the dataset, which concatenates the edge indices of the individual graphs along the second dimension, so the batched edge_index has a shape like [2, n_edges1 + n_edges2 + …].
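To make the batching concrete, here is a tiny hand-built example of what the concatenated layout looks like. The numbers are hypothetical; in PyG the relabeling of node indices and the node-to-graph `batch` vector are produced automatically when graphs are collated into a `Batch`.

```python
# Hypothetical batch of two graphs.
# Graph 1: 3 nodes (0, 1, 2) with directed edges (0,1) and (1,2).
# Graph 2: 2 nodes, relabeled to 3 and 4 in the batch, with edge (3,4).
edge_index = [
    [0, 1, 3],  # source nodes of all graphs, concatenated
    [1, 2, 4],  # target nodes of all graphs, concatenated
]
# Node-to-graph assignment, analogous to PyG's Batch.batch vector.
batch = [0, 0, 0, 1, 1]

# The second dimension is n_edges1 + n_edges2 = 2 + 1.
print(len(edge_index[0]))  # 3
```

A negative edge such as (0, 4) would connect node 0 (graph 0) to node 4 (graph 1), which is exactly the cross-graph case described below.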

If I use torch_geometric.utils.negative_sampling on this batched edge_index, it seems to sample negative edges between nodes that belong to different graphs in the batch. But since each graph should contribute its own set of positive and negative edges when training the autoencoder, these cross-graph negatives are invalid and produce wrong gradient signals during training. How can I restrict the negative sampling to pairs of nodes within the same graph?
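The constraint I'm after can be sketched in plain Python: group nodes by their graph id and only ever draw candidate pairs from within one group, rejecting positives and self-loops. The helper name and list-based inputs below are my own invention for illustration; a real implementation would work on tensors.

```python
import random

def per_graph_negative_sampling(edge_index, batch, num_neg, seed=0):
    """Sample negative edges only between nodes of the same graph.

    Hypothetical helper: edge_index is [[sources], [targets]], and batch
    maps each node index to its graph id (like PyG's Batch.batch vector).
    Sampling is with replacement, so duplicates may occur.
    """
    rng = random.Random(seed)
    positives = set(zip(edge_index[0], edge_index[1]))

    # Group node ids by graph so candidate pairs never cross graphs.
    nodes_by_graph = {}
    for node, graph in enumerate(batch):
        nodes_by_graph.setdefault(graph, []).append(node)

    negatives = []
    while len(negatives) < num_neg:
        graph = rng.choice(list(nodes_by_graph))
        u = rng.choice(nodes_by_graph[graph])
        v = rng.choice(nodes_by_graph[graph])
        # Reject self-loops and existing (positive) edges.
        if u != v and (u, v) not in positives:
            negatives.append((u, v))
    return negatives
```

For what it's worth, PyG also ships torch_geometric.utils.batched_negative_sampling, which takes the batch vector alongside edge_index and appears to implement this same per-graph restriction natively, so a hand-rolled loop like the above should only be needed for custom rejection rules.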
