Autograd function for a given model architecture

Dear all,

If we fix the model architecture, is it possible to extract the actual function that computes the gradient of the loss with respect to the model's parameters?

This could be useful for learning purposes, and it would also enable fast porting of the Python code to FPGAs using HLS (high-level synthesis).


Hi,

You can check the tool https://github.com/szagoruyko/pytorchviz/ to visualize the graph that corresponds to the backward pass. But this won't tell you what the actual mathematical operations performed are, I'm afraid.
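For a quick look without any extra dependency, you can also walk the autograd graph yourself. This is a minimal sketch (the `walk` helper below is illustrative, not part of pytorchviz): starting from `loss.grad_fn`, each node's `next_functions` attribute links to the backward nodes that feed it, so a simple traversal prints the names of the backward operations PyTorch will run.

```python
import torch
import torch.nn as nn

# A small example model; any architecture works the same way.
model = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 1))
x = torch.randn(2, 4)
loss = model(x).sum()

def walk(fn, depth=0, seen=None):
    """Recursively print the backward graph node types, indented by depth."""
    if seen is None:
        seen = set()
    if fn is None or fn in seen:
        return
    seen.add(fn)
    print("  " * depth + type(fn).__name__)
    for next_fn, _ in fn.next_functions:
        walk(next_fn, depth + 1, seen)

# Prints node names such as SumBackward0, AddmmBackward0, ReluBackward0,
# and AccumulateGrad (the leaves where parameter gradients are stored).
walk(loss.grad_fn)
```

Note that these names identify the backward kernels, not their mathematical definitions; for those you would have to look at PyTorch's derivative definitions in the source tree.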


Thanks for your reply, Alban.

Do you know if there's a tool to extract the whole graph, so that the backward pass could be re-coded as a C++ function?

Hi,

No, there is no such tool, I'm afraid.
Most of the backward functions are not even exposed as user-facing APIs.
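That said, for a fixed architecture you can derive the backward pass by hand and check it against autograd before porting it to C++. A hedged sketch (assuming a single linear layer with an MSE loss, chosen only for illustration): the hand-derived gradients should match `W.grad` and `b.grad` produced by `loss.backward()`.

```python
import torch

torch.manual_seed(0)
x = torch.randn(5, 3)
y = torch.randn(5, 2)
W = torch.randn(2, 3, requires_grad=True)
b = torch.randn(2, requires_grad=True)

# Forward: pred = x @ W.T + b, loss = mean((pred - y)**2)
pred = x @ W.T + b
loss = ((pred - y) ** 2).mean()
loss.backward()  # autograd's gradients, for reference

# Hand-derived backward pass (what you would port to C++):
# dL/dpred = 2 * (pred - y) / N, where N is the number of elements.
N = pred.numel()
dpred = 2.0 * (pred.detach() - y) / N
dW = dpred.T @ x        # dL/dW, shape (2, 3)
db = dpred.sum(dim=0)   # dL/db, shape (2,)

print(torch.allclose(W.grad, dW, atol=1e-6))  # True
print(torch.allclose(b.grad, db, atol=1e-6))  # True
```

For deeper networks the same chain-rule bookkeeping applies layer by layer, but it quickly becomes tedious, which is exactly the work autograd automates.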


I see, thank you anyway.