Call self-defined function twice - autograd not working

Hello @magz,

Note that what you see as torch.nn.relu is not a function instance, but rather a "factory", similar to what is done with Variables when you use somevar = Variable(...); b = somevar.someop(...).
If you look at the source code of Variable, you can see what is done internally to make Variable.someop work (which you could do manually as well).
The way the Function class then works is that you record what you need in the forward and compute the gradient in the backward at the point specified by the inputs of the forward. This state used to be stored on objects of the class, but it has been separated out into contexts for the new-style autograd, which allows higher-order derivatives.
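To make this concrete, here is a minimal sketch of a new-style Function (the class name MyReLU is just for illustration): state is saved on the ctx in forward and read back in backward, so the same class can safely be applied twice.

```python
import torch

class MyReLU(torch.autograd.Function):
    # New-style autograd Function: state lives on the ctx object,
    # not on a Function instance, so the same class can be applied
    # many times and supports the new-style autograd machinery.
    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input)   # record what backward will need
        return input.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        (input,) = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0      # ReLU passes gradient only where input >= 0
        return grad_input

x = torch.randn(5, requires_grad=True)
y = MyReLU.apply(x)        # call via .apply, not by instantiating the class
z = MyReLU.apply(y)        # applying it a second time works, thanks to ctx
z.sum().backward()
```

Because each call gets its own ctx, the two applications do not clobber each other's saved tensors, which is exactly the failure mode the old instance-based style could run into.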

Best regards

Thomas