How to access the implementation of loss and activation functions

How can I access the implementations of the different loss and activation functions in PyTorch?

I tried the obvious ‘show implementation’ (in PyCharm), but I hit a dead end since the functions look like stubs for C functions. I don’t mind looking at C code; I just need to see how these functions are implemented. I am using PyTorch as a reference to test my own NN implementation against, and I suspect some functions are implemented slightly differently from mine, which makes the two networks diverge over large training sets.

Try the following:

  1. Find the loss function in one of these two places:

If it’s in Declarations.cwrap, that entry tells you the name of the function in C. Grep for that name under aten/src/TH, or under aten/src/THC if you’re looking for the CUDA version.

If it’s in native_functions.yaml, grep for the name under aten/src/ATen/native. (A sketch of both searches follows.)
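
For concreteness, something like the following, run from the repository root, covers both cases. sigmoid here is just a stand-in for whichever function you’re after, and the exact path to Declarations.cwrap may differ between versions:

# Case 1: is it declared in the cwrap bindings?
grep -n "sigmoid" aten/src/ATen/Declarations.cwrap

# Case 2: is it declared as a native function?
grep -n "sigmoid" aten/src/ATen/native/native_functions.yaml
# If so, the implementation lives somewhere under aten/src/ATen/native:
grep -rn "sigmoid" aten/src/ATen/native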

That didn’t seem to work; perhaps I’m looking in the wrong place?
In Declarations.cwrap I see, for instance, sigmoid (an activation function), but there is no sigmoid in aten/src/TH.

And I don’t see the loss functions in either Declarations.cwrap or native_functions.yaml.

Sorry, by “grep” I mean searching the contents of all files under aten/src/TH:

[0] richard@:~/pytorch/pytorch/aten/src/TH (master) $ ag -Q "THTensor_(sigmoid)"
generic/THTensorMath.h
154:TH_API void THTensor_(sigmoid)(THTensor *r_, THTensor *t);
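
As for the loss functions: if they show up in neither Declarations.cwrap nor native_functions.yaml, they may be bound through aten/src/ATen/nn.yaml instead, with the kernels living under aten/src/THNN (CPU) and aten/src/THCUNN (CUDA). The paths and the “MSECriterion” name below are my assumptions based on the TH-era source layout, so adjust them for your checkout:

# hypothetical example: locate the MSE loss ("MSECriterion" in TH naming)
grep -n "MSECriterion" aten/src/ATen/nn.yaml
grep -rn "MSECriterion" aten/src/THNN aten/src/THCUNN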