Basic PyTorch Geometric architecture for graph signal classification

Hi! I’m new to PyTorch Geometric, but my understanding is that the available examples mostly cover node or graph classification, while I’d like to do graph signal classification.

So the feature matrix X will have shape (n, m), y will be (1, n), and the edge list (2, E), where E is the number of edges.

An example could be a feature matrix where, for every author, we have information about whether they were involved in a certain paper (so n authors and m papers). We also know which paper cited which other paper (an m×m adjacency matrix, or equivalently a 2×E edge list), and we want to predict the department of each author. This way the graph underlying the features is exactly the same for every author; only the signal on it changes.
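
For concreteness, with made-up sizes and random values, the tensors I have in mind would look something like this:

```python
import torch

n, m, E = 4, 6, 5                                # authors, papers, citation edges (toy sizes)
X = torch.randint(0, 2, (n, m)).float()          # X[i, j] = 1 if author i is involved in paper j
y = torch.randint(0, 3, (n,))                    # department label for each author
edge_index = torch.tensor([[0, 1, 2, 3, 4],
                           [1, 2, 3, 4, 5]])     # [2, E] citations between papers
```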

So I could reformulate this as a graph classification problem and represent each author as their own graph (see the sketch below), but that does not look very efficient, especially if there are a lot of papers.
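
As a hypothetical illustration of that reformulation, each author would become their own Data object, with the identical citation edge_index duplicated n times; this assumes a recent PyG where DataLoader lives under torch_geometric.loader:

```python
import torch
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

n, m = 4, 6                                      # toy sizes: authors, papers
X = torch.randint(0, 2, (n, m)).float()
y = torch.randint(0, 3, (n,))
edge_index = torch.tensor([[0, 1, 2, 3, 4],
                           [1, 2, 3, 4, 5]])     # shared citation graph over the m papers

# One graph per author: the structure is copied n times, only x and y change.
data_list = [Data(x=X[i].unsqueeze(-1), edge_index=edge_index, y=y[i].view(1))
             for i in range(n)]
loader = DataLoader(data_list, batch_size=2, shuffle=True)
```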

Could someone point me towards a code example with a similar approach? Or tell me what can be modified in the usual examples to make them work for my problem?

Your use case might be related to, e.g., genre prediction for movies, or maybe even revenue prediction given the movies and their genres. I’m not sure how closely related these topics are, but I know that “genre prediction” use cases have come up in the past, so you might be able to find some related work in this field.

That being said, your geometric approach could certainly work as well.

There is an example implemented with Spektral (for TensorFlow, unfortunately) that does graph signal classification; you can find the code in graph_signal_classification_mnist.py. One thing to note is that you may need to use self.flatten = Flatten() instead of Spektral’s GlobalSumPool().
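
For what it’s worth, a rough PyTorch Geometric counterpart of that model could look like the sketch below (layer sizes are made up). It uses DenseGCNConv, which takes a dense adjacency and, as far as I can tell, broadcasts a single shared [N, N] adjacency over a batch of signals shaped [batch, N, features], and it flattens over the nodes instead of global pooling, mirroring the Flatten suggestion:

```python
import torch
from torch_geometric.nn import DenseGCNConv

class SignalClassifier(torch.nn.Module):
    """Classify per-sample signals that all live on one fixed graph ("mixed mode")."""
    def __init__(self, num_nodes, in_channels, hidden, num_classes):
        super().__init__()
        self.conv1 = DenseGCNConv(in_channels, hidden)
        self.conv2 = DenseGCNConv(hidden, hidden)
        self.lin = torch.nn.Linear(num_nodes * hidden, num_classes)

    def forward(self, x, adj):
        # x: [batch, num_nodes, in_channels]; adj: [num_nodes, num_nodes], shared by all samples
        h = torch.relu(self.conv1(x, adj))
        h = torch.relu(self.conv2(h, adj))
        h = h.flatten(start_dim=1)      # flatten over nodes instead of GlobalSumPool
        return self.lin(h)
```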

Even though I am keen to rewrite the code using PyTorch Geometric, I cannot find a data loader similar to the one used by Spektral (i.e. MixedLoader) for the case with only one graph and multiple graph signal instances.
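
A possible workaround I’ve been considering (just a sketch, not an established PyG loader): since the graph never changes, a plain torch.utils.data loader over the signals and labels could play the role of Spektral’s MixedLoader, with the shared adjacency kept as a single tensor outside the loader. The toy data below and the SignalClassifier class from the sketch above are my own assumptions:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader
from torch_geometric.utils import to_dense_adj

n, m = 100, 50                                      # toy sizes: authors (signals), papers (nodes)
X = torch.randint(0, 2, (n, m, 1)).float()          # [n, m, 1]: one signal per author
y = torch.randint(0, 5, (n,))                       # department labels
edge_index = torch.randint(0, m, (2, 200))          # toy citation edges
adj = to_dense_adj(edge_index, max_num_nodes=m)[0]  # [m, m], identical for every sample

# Only the signals and labels get batched; the adjacency is passed alongside each batch.
loader = DataLoader(TensorDataset(X, y), batch_size=16, shuffle=True)

model = SignalClassifier(num_nodes=m, in_channels=1, hidden=8, num_classes=5)  # class from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for xb, yb in loader:                               # xb: [batch, m, 1], yb: [batch]
    optimizer.zero_grad()
    out = model(xb, adj)
    loss = torch.nn.functional.cross_entropy(out, yb)
    loss.backward()
    optimizer.step()
```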