Cannot backpropagate through networkx functions with PyTorch

Hi,

We are trying to calculate a loss function using networkx functions. Since networkx operates on numpy arrays, the computation graph is broken and we cannot backpropagate through the loss; the loss doesn't have a grad_fn attribute. How can we overcome this issue without having to re-implement the backward function ourselves?
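A minimal sketch of what we mean (the graph construction and the centrality call here are just illustrative):

```python
import networkx as nx
import torch

# Hypothetical setup: a learned, weighted adjacency matrix
logits = torch.randn(5, 5, requires_grad=True)
adj = torch.sigmoid(logits)

# networkx needs a numpy array, which forces us off the autograd graph
G = nx.from_numpy_array(adj.detach().numpy())

# networkx returns plain Python / numpy numbers
centrality = nx.degree_centrality(G)

# Re-wrapping the result creates a fresh tensor with no history
loss = torch.tensor(sum(centrality.values()))
print(loss.grad_fn)  # None -> loss.backward() raises an error
```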

Thanks

I have the exact same issue. I am looking for a solution that doesn't require implementing backward or the networkx operations myself. I am fairly new to PyTorch, so I would appreciate any help.

I'm not familiar with the networkx library, but based on your description you would need to implement the backward functions manually if numpy arrays are used under the hood, e.g. via a custom torch.autograd.Function as sketched below.
Alternatively, you could also try to use PyTorch tensors instead of numpy arrays (if all operations are supported), but this sounds as if you would have to rewrite the entire library.
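Something along these lines might work as a template (untested sketch; the "strength" computation is just a toy stand-in for your actual networkx call, chosen because its gradient is easy to derive by hand):

```python
import networkx as nx
import numpy as np
import torch

class NxStrength(torch.autograd.Function):
    """Weighted out-degree ('strength') per node: strength_i = sum_j A_ij.
    Forward uses networkx; backward is derived by hand, since the
    function is linear in A: d strength_i / d A_kj = 1 iff k == i."""

    @staticmethod
    def forward(ctx, adj):
        ctx.save_for_backward(adj)
        A = adj.detach().cpu().numpy()
        G = nx.from_numpy_array(A, create_using=nx.DiGraph)
        strength = np.array([d for _, d in G.out_degree(weight="weight")])
        return adj.new_tensor(strength)

    @staticmethod
    def backward(ctx, grad_output):
        (adj,) = ctx.saved_tensors
        # Row i of A feeds only strength_i, so the incoming gradient
        # is broadcast across that row.
        return grad_output.unsqueeze(1).expand_as(adj).clone()

logits = torch.randn(4, 4, requires_grad=True)
adj = torch.sigmoid(logits)
loss = NxStrength.apply(adj).sum()
loss.backward()                  # gradients now flow back to logits
print(logits.grad is not None)   # True
```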
Could you explain a bit what networkx is doing and what your exact use case is?

We were using networkx to calculate centrality measures of a given graph. However, we decided to move on with a simpler loss function using different ways of measuring centrality. Thanks for the answer!
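In case it helps anyone else, a rough sketch of the direction we took: computing centrality directly on the adjacency tensor so everything stays differentiable (the exact measures and the loss are placeholders, not our actual code):

```python
import torch

def degree_centrality(adj):
    # Differentiable analogue of nx.degree_centrality for a weighted
    # adjacency tensor: per-node strength normalised by (n - 1).
    n = adj.shape[0]
    return adj.sum(dim=1) / (n - 1)

def eigenvector_centrality(adj, iters=50):
    # Differentiable eigenvector centrality via power iteration
    # (converges for matrices with positive entries).
    x = torch.full((adj.shape[0],), 1.0, dtype=adj.dtype, device=adj.device)
    for _ in range(iters):
        x = adj @ x
        x = x / x.norm()
    return x

logits = torch.randn(5, 5, requires_grad=True)
adj = torch.sigmoid(logits)
loss = degree_centrality(adj).var()  # placeholder loss on centralities
loss.backward()
```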
