Where is the implementation of common autograd operators?

I want to know how PyTorch implements autograd for all operators, such as add, multiply, power, and so on.


For Python functions, users define their own autograd.Function. I don't think there are many of these left in the core, though, except for things like checkpointing.
For C++ implementations in the core, the ATen library contains all the C++ functions, and this file defines the mapping between the forward and backward functions.
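To make the Python side concrete, here is a minimal sketch of a custom autograd.Function. The operator (`Cube`, computing x³) is a made-up example, not something from the core; it just shows the forward/backward pattern that user-defined functions follow.

```python
import torch

class Cube(torch.autograd.Function):
    # Hypothetical example op: y = x ** 3
    @staticmethod
    def forward(ctx, x):
        # Save the input so backward can use it
        ctx.save_for_backward(x)
        return x ** 3

    @staticmethod
    def backward(ctx, grad_output):
        # dL/dx = 3 * x**2 * grad_output (chain rule)
        (x,) = ctx.saved_tensors
        return 3 * x ** 2 * grad_output

x = torch.tensor(2.0, requires_grad=True)
y = Cube.apply(x)
y.backward()
print(x.grad)  # 3 * 2**2 = 12
```

Note that you call the op via `Cube.apply(x)` rather than instantiating the class; `apply` is what hooks the node into the autograd graph.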


Thanks for your replies.
And I would like to know where the apply method is overridden for these operators.

This is some automatically generated code based on the yaml file linked above.
If you have a local install, you can find these in torch/csrc/autograd/generated/Functions.cpp.
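You can also see those generated backward nodes at runtime: the `grad_fn` attribute of a tensor produced by a differentiable op points at one of them. A small sketch (the exact class names like `MulBackward0` are autogenerated and could differ between versions):

```python
import torch

a = torch.ones(3, requires_grad=True)
b = a * 2            # multiplication records a MulBackward0 node
c = b.pow(2).sum()   # sum records a SumBackward0 node

# Inspect the generated backward nodes attached to each result
print(type(b.grad_fn).__name__)
print(type(c.grad_fn).__name__)
```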
