How does backward() work for a torch Variable?

Is it the gradient of the eventual downstream loss with respect to the current layer? In that case, for a scalar loss that is also the "most downstream" output, we get dloss/dloss = 1. But if we want to call backward() from some middle layer, we have to provide the gradient of the downstream loss w.r.t. all the outputs of that middle layer (evaluated at the current values of those outputs) in order to get well-defined numerical results. This makes sense to me and is exactly what happens in backprop.
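
To make it concrete, here is a minimal sketch of what I mean (the shapes and values are made up, and this assumes a recent PyTorch where Variable and Tensor are merged):

```python
import torch

x = torch.randn(3, requires_grad=True)  # leaf node
y = x * 2                               # intermediate (non-scalar) node
loss = y.sum()                          # scalar "most downstream" output

# Scalar loss: backward() implicitly uses dloss/dloss = 1
loss.backward()
print(x.grad)  # tensor([2., 2., 2.])

# Non-scalar middle node: backward() needs dloss/dy,
# one value per element of y
x.grad.zero_()
y = x * 2
y.backward(torch.ones_like(y))  # dloss/dy = 1 everywhere, since loss = y.sum()
print(x.grad)  # tensor([2., 2., 2.]) again
```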
In more technical terms: let y be an arbitrary node in a computational graph. If we call y.backward(arg), the argument arg should be the gradient of the root of the computational graph with respect to y, evaluated at a specific value of y (usually the current value of y). If y is a whole layer, this means arg should provide a value for each neuron in y. If y is the final loss, it is also the root of the graph, and the usual scalar 1 is the only reasonable argument arg.
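
And a small check of that claim (again just a sketch with a toy loss I picked so that dloss/dy is easy to write down): supplying y.backward() with the gradient of the root w.r.t. y should reproduce the same leaf gradients as calling backward() from the root itself.

```python
import torch

x = torch.randn(4, requires_grad=True)
y = x ** 2                       # the "middle layer" node
loss = (3 * y).sum()             # the root of the graph

# Path 1: backprop from the root; the argument defaults to the scalar 1
loss.backward(retain_graph=True)
grad_from_root = x.grad.clone()

# Path 2: backprop from the middle node y, supplying
# arg = dloss/dy evaluated at the current value of y
x.grad.zero_()
dloss_dy = torch.full_like(y, 3.0)  # since loss = (3 * y).sum(), dloss/dy = 3 everywhere
y.backward(dloss_dy)

print(torch.allclose(grad_from_root, x.grad))  # True
```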

Am I getting there?
