I tried to implement my own custom loss based on the tutorial in extending autograd.

Here is the implementation outline:

```
class MyCustomLoss(Function):
    def forward(self, input, target):
        ...  # implementation
        return loss  # a single number (averaged loss over batch samples)

    def backward(self, grad_output):
        ...  # implementation
        return grad_input
```

The forward function takes the input from the previous layer and a target containing an array of labels (categorical, with possible values {0, …, k-1}, where k is the number of classes).

In the backward function I compute the gradient of the loss with respect to the input. When I run it, I get an error saying that one more gradient is expected. I assume PyTorch also requires me to return the gradient of the loss with respect to the target, which in this case does not really make sense (the target is a categorical variable), and we do not need it to backpropagate the gradient.
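If that assumption is right, I suppose backward would have to return one value per input to forward, with `None` in the slot for the non-differentiable target. Here is a minimal sketch of that idea using the newer staticmethod `Function` API, with a toy placeholder loss (mean of the input entries picked out by the target indices) standing in for my real loss:

```python
import torch
from torch.autograd import Function

class MyCustomLoss(Function):
    @staticmethod
    def forward(ctx, input, target):
        # Toy loss for illustration only: average the input entries
        # selected by the target indices.
        ctx.save_for_backward(input, target)
        rows = torch.arange(input.size(0))
        return input[rows, target].mean()

    @staticmethod
    def backward(ctx, grad_output):
        input, target = ctx.saved_tensors
        rows = torch.arange(input.size(0))
        # Gradient of the mean is 1/N at each selected entry.
        grad_input = torch.zeros_like(input)
        grad_input[rows, target] = grad_output / input.size(0)
        # One return value per forward input: a gradient for `input`,
        # None for the non-differentiable `target`.
        return grad_input, None

inp = torch.randn(10, 10, dtype=torch.double, requires_grad=True)
target = torch.randperm(10)
loss = MyCustomLoss.apply(inp, target)
loss.backward()
```

With the `None` in place, `loss.backward()` runs without the "invalid number of gradient tensors" error in this sketch, but I am not sure this is the intended way to handle non-differentiable inputs.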

Here is the code I use to run the implementation:

```
inp = Variable(torch.randn(10,10).double(), requires_grad=True)
target = Variable(torch.randperm(10), requires_grad=False)
loss = MyCustomLoss()(inp, target)
loss.backward()
```

And here is the error message I get:

```
RuntimeError: MyCustomLoss returned an invalid number of gradient tensors (expected 2, but got 1)
```

Is there anything I missed? How do I correctly implement a custom loss?

Thank you.