Actual implementation of updateGradInput and accGradParameters functions

Where are the updateGradInput and accGradParameters functions of the most basic built-in nn layers (like Linear, ReLU, MaxPool, etc.) implemented in the new torch.nn package?

Thanks

Hi,

That depends on the layer.
For the ones implemented directly in C++, you can find the implementations in the THNN (CPU) and THCUNN (CUDA) libraries.
Others are written as plain autograd functions: they are differentiated automatically like any other function, so no explicit updateGradInput or accGradParameters exists for them.
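
For intuition, here is a minimal sketch (not the actual torch.nn source) of how a layer looks when written as an autograd function; the backward computes both the gradient w.r.t. the input (the updateGradInput role) and the gradients w.r.t. the parameters (the accGradParameters role):

```python
import torch

class MyLinear(torch.autograd.Function):
    """Toy linear layer for illustration; not the real torch.nn implementation."""

    @staticmethod
    def forward(ctx, input, weight, bias):
        ctx.save_for_backward(input, weight)
        return input.mm(weight.t()) + bias

    @staticmethod
    def backward(ctx, grad_output):
        input, weight = ctx.saved_tensors
        grad_input = grad_output.mm(weight)      # what updateGradInput would return
        grad_weight = grad_output.t().mm(input)  # what accGradParameters would accumulate
        grad_bias = grad_output.sum(0)
        return grad_input, grad_weight, grad_bias
```

Calling `y = MyLinear.apply(x, w, b)` followed by `y.sum().backward()` populates `x.grad`, `w.grad`, and `b.grad` through this single backward.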

Thanks a lot! Yes, I’m interested in the primitives (the ones not differentiated through autograd).
Do you know whether there are Python wrappers for THNN and THCUNN, so that I can call updateGradInput and accGradParameters directly from Python?
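
For reference, here is roughly what I want to get at, expressed with torch.autograd.grad (just a sketch; it computes the same quantities without going through the THNN entry points themselves):

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 3, requires_grad=True)
weight = torch.randn(5, 3, requires_grad=True)
bias = torch.randn(5, requires_grad=True)

out = F.linear(x, weight, bias)
grad_output = torch.randn_like(out)

# What updateGradInput would return: gradient w.r.t. the input only.
(grad_input,) = torch.autograd.grad(out, x, grad_output, retain_graph=True)

# What accGradParameters would accumulate: gradients w.r.t. the parameters.
grad_weight, grad_bias = torch.autograd.grad(out, (weight, bias), grad_output)
```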