How can I access the implementations of the different loss and activation functions in PyTorch?
I tried the obvious "show implementation" (in PyCharm), but I hit a dead end: the Python-visible functions look like stubs for C functions. I don't mind reading C code; I just need to see how these functions are implemented. I am using PyTorch as a reference to test my own neural-network implementation against, and I suspect some functions are implemented slightly differently from mine, causing the two networks to diverge over large training sets.
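To illustrate the kind of subtle discrepancy I mean: a minimal NumPy sketch (not PyTorch's actual code) contrasting a naive cross-entropy with the log-sum-exp form that libraries typically use. Both agree on small inputs, but only the stable form survives large logits:

```python
import numpy as np

def naive_ce(logits, target):
    # Naive: softmax then log. np.exp overflows for large logits,
    # so this returns a non-finite value where the stable form does not.
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[target])

def stable_ce(logits, target):
    # Log-sum-exp trick: subtract the max before exponentiating,
    # the form deep-learning libraries typically use internally.
    m = logits.max()
    lse = m + np.log(np.exp(logits - m).sum())
    return lse - logits[target]

small = np.array([2.0, 1.0, 0.1])
print(naive_ce(small, 0), stable_ce(small, 0))  # agree to float precision

big = np.array([1000.0, 0.0])
print(stable_ce(big, 0))  # finite; the naive version is not
```

Differences like this are invisible in the Python API, which is why I want to see the underlying C/C++ source.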
If the function is listed in Declarations.cwrap, that file tells you the name of the underlying C function. You can then grep for that name under aten/src/TH, or under aten/src/THC if you're looking for the CUDA version.
If it's listed in native_functions.yaml, you can grep for the name under aten/src/ATen/native.
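A sketch of that workflow as shell commands. The real files live in a PyTorch source checkout; here a tiny mock of native_functions.yaml is created first so the commands run anywhere (the hardtanh entry and the demo/ path are illustrative, not the real tree):

```shell
# Mock a miniature source tree; in practice, run these greps from the
# root of an actual PyTorch checkout instead.
mkdir -p demo/aten/src/ATen/native
cat > demo/aten/src/ATen/native/native_functions.yaml <<'EOF'
- func: hardtanh(Tensor self, Scalar min_val=-1, Scalar max_val=1) -> Tensor
EOF

# Step 1: find the op's schema entry in native_functions.yaml.
grep -n "hardtanh" demo/aten/src/ATen/native/native_functions.yaml

# Step 2: grep the native/ directory for the C++ implementation
# (in a real checkout this turns up the .cpp/.cu files defining it).
grep -rn "hardtanh" demo/aten/src/ATen/native/
```

The same two-step pattern applies to the Declarations.cwrap path, just with aten/src/TH or aten/src/THC as the grep target.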