Where does a THNN_updateGradInput function get the input?

I want to understand how ReLU's backward works. I found that it is implemented by torch._C._nn.threshold, not by an autograd.Function object.

I think all objects in torch._C._nn are defined by aten/src/ATen/nn.yaml, and the backward implementation of torch._C._nn.threshold is THNN_(Threshold_updateGradInput). However, I find that it accepts a THTensor *input, and I don't know where it gets this tensor, since it is not an autograd.Function that can save the input on ctx.

An autograd.Function (sort of, but in C++) is generated from nn.yaml and other metadata; it saves the input and passes it to the Threshold_updateGradInput function.
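To make the save/restore mechanism concrete, here is a hedged Python sketch of what the generated C++ node conceptually does. This is not PyTorch's actual generated code (MyThreshold and its argument handling are illustrative); it only shows how forward saves the input so that backward can retrieve it, which is the analogue of Threshold_updateGradInput receiving THTensor *input:

```python
import torch

class MyThreshold(torch.autograd.Function):
    """Illustrative Python analogue of the C++ autograd node generated
    for threshold; not the real generated code."""

    @staticmethod
    def forward(ctx, input, threshold, value):
        # The generated node similarly stashes the input for backward.
        ctx.save_for_backward(input)
        ctx.threshold = threshold
        return torch.where(input > threshold, input,
                           torch.full_like(input, value))

    @staticmethod
    def backward(ctx, grad_output):
        # Retrieved here; conceptually this is the tensor that ends up
        # as the THTensor *input argument of Threshold_updateGradInput.
        (input,) = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input <= ctx.threshold] = 0
        return grad_input, None, None

x = torch.tensor([-1.0, 2.0], requires_grad=True)
y = MyThreshold.apply(x, 0.0, 0.0)   # behaves like ReLU for threshold=0
y.sum().backward()
print(x.grad)                        # gradient is 0 where input <= 0
```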

If you have a local source build of PyTorch, looking at the file build/aten/src/ATen/CPUFloatType.cpp would help.


What is other metadata?

Other metadata is whatever I might've missed: anything in Declarations.cwrap and generic/THNN.h (generic/THNN.h is written in a particular way, with structured comments in the code about which arguments are buffers, which are inputs, etc.)…
