C extension loss function


I’m having some trouble with a C extension in PyTorch.

For a hidden layer, in the Function class:

class myFunction(torch.autograd.Function):

     def forward(self, input):
          # call C code
          # calculate the output
          return output

     def backward(self, grad_output):
         # call C code
         # calculate the grad_input
         return grad_input

My question is: what is grad_output when this layer is the loss layer, since the loss is the final layer? Is it “1”?
And how should I write the backward code?


Please check the autograd doc on how to write a new Function.
For a loss, yes, the grad_output is just 1. Your output should be a tensor with a single value, and grad_output will also be a tensor with a single value. Remember that loss.backward() is actually just a shortcut for loss.backward(torch.Tensor([1])).
The backward code should implement the derivative of your loss function: if your loss is output = f(input), then your backward function should compute grad_input = df/dinput * grad_output.
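To make that chain rule concrete, here is a small framework-free sketch in plain Python (not the actual C extension, and the names mse_forward / mse_backward are just illustrative) for a mean-squared-error loss f(input) = mean((input - target)^2):

```python
# Illustrative forward/backward for an MSE-style loss layer.
# f(input) = mean((input_i - target_i)^2)
# df/dinput_i = 2 * (input_i - target_i) / n

def mse_forward(input, target):
    # The loss layer's output is a single scalar value.
    n = len(input)
    return sum((x - t) ** 2 for x, t in zip(input, target)) / n

def mse_backward(input, target, grad_output=1.0):
    # grad_input_i = df/dinput_i * grad_output.
    # grad_output defaults to 1, matching loss.backward(torch.Tensor([1]))
    # when the loss is the final node of the graph.
    n = len(input)
    return [2.0 * (x - t) / n * grad_output for x, t in zip(input, target)]

input = [1.0, 2.0, 3.0]
target = [0.0, 2.0, 5.0]
loss = mse_forward(input, target)    # (1 + 0 + 4) / 3
grad = mse_backward(input, target)   # [2/3, 0, -4/3]
```

Your C backward code would compute the same per-element derivative, just on the underlying tensor storage instead of Python lists.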

Got it! Thanks a lot for your help.