Is there a way to generate and then step through a computation graph in PyTorch, similar to how TensorFlow returns a Graph (or GraphDef) object?

I understand you can use torchviz/graphviz to visualize the computation graph, but is there a way to get the graph at the finest granularity possible, i.e. the individual multiply/add/etc. ops?
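To make the "iterate through the graph" part concrete, the closest thing I've found so far is `torch.fx.symbolic_trace`, but it only iterates at module/function granularity, not at the level of individual mults/adds. A minimal sketch, using a hypothetical toy model:

```python
import torch
import torch.nn as nn
import torch.fx

# Hypothetical toy model, just to illustrate the question
model = nn.Sequential(nn.Linear(4, 3), nn.ReLU())

# symbolic_trace records the forward pass as an fx.Graph of nodes
traced = torch.fx.symbolic_trace(model)

for node in traced.graph.nodes:
    # node.op is the node kind ('placeholder', 'call_module', 'output', ...)
    # node.target is what gets called (a module name, function, etc.)
    print(node.op, node.target)
```

Note how the `Linear` layer shows up as a single `call_module` node rather than separate mult and add ops, which is exactly the granularity problem I'm asking about.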

Thanks!

When I say generate a computational graph, I mean the following: how do I express my net as a function `f(x)` such that `f` consists only of the most basic algebraic operations (addition, subtraction, multiplication, division, etc.)?

Case A: Imagine my model consists only of linear layers and activation functions (say ReLU). I can break a linear layer down into *mult* and *add* ops (since a linear layer is just `A*x + b`, where `x` is the input), but how would I break ReLU down into algebraic ops?
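For instance, the only algebraic form I can think of for ReLU is `max(x, 0)`, which can be rewritten with just add, abs, and a scalar multiply. A sketch of what I mean by "breaking down" an activation:

```python
import torch

def relu_algebraic(x):
    # ReLU(x) = max(x, 0), expressible purely with
    # add, abs, and scalar multiply: 0.5 * (x + |x|)
    return 0.5 * (x + x.abs())

x = torch.tensor([-2.0, 0.0, 3.0])
print(relu_algebraic(x))  # matches torch.relu(x)
```

But doing this by hand for every op clearly doesn't scale, which is why I'm asking whether PyTorch can produce such a decomposition for me.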

Case B: Imagine my model is much more complicated, and now consists of Convs, MaxPools, and Tanh's (among other activation functions). How do I *break down* all of these into algebraic ops?

Given this, is there any way to trace the algebraic ops used in the net, from the input data all the way to the output?

Any help at all is appreciated. Thanks!