How to implement a backward pass for a custom loss?

Hello, I am trying to implement a custom loss function with an architecture similar to the Huber loss.
That is, a piecewise combination of multiple functions.
In this case, I've heard that I should not rely on PyTorch's automatic differentiation and should instead write my own backward pass. However, I cannot find any example code, and I don't understand how the function should return the gradient tensor.
Any help?


If your loss is differentiable and the gradients you want are the ones that correspond to your forward pass, then you should use the autograd version.
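To illustrate the autograd route (a minimal sketch, not the poster's actual loss): a Huber-style piecewise loss can be built entirely from differentiable torch ops, and autograd derives the backward pass on its own.

```python
import torch

def huber_like_loss(pred, target, delta=1.0):
    # Piecewise loss: quadratic for small errors, linear for large ones.
    # Every op here is differentiable, so no custom backward is needed.
    err = pred - target
    abs_err = err.abs()
    quadratic = 0.5 * err ** 2
    linear = delta * (abs_err - 0.5 * delta)
    return torch.where(abs_err <= delta, quadratic, linear).mean()

pred = torch.tensor([0.2, 3.0], requires_grad=True)
target = torch.tensor([0.0, 0.0])
loss = huber_like_loss(pred, target)
loss.backward()  # autograd fills pred.grad for both branches
```

Here `torch.where` selects the branch per element, and the gradient of the selected branch flows back through `backward()` automatically.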

If you need a custom backward, either for performance reasons or because you want gradients that differ from the true ones, you can check this section of the docs, which explains how to do it.
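The doc section mentioned above covers subclassing `torch.autograd.Function`. A minimal sketch of the same Huber-style loss with a hand-written backward (the class name `MyHuber` is made up for illustration):

```python
import torch

class MyHuber(torch.autograd.Function):
    @staticmethod
    def forward(ctx, pred, target, delta=1.0):
        err = pred - target
        ctx.save_for_backward(err)   # stash what backward will need
        ctx.delta = delta
        abs_err = err.abs()
        loss = torch.where(abs_err <= delta,
                           0.5 * err ** 2,
                           delta * (abs_err - 0.5 * delta))
        return loss.mean()

    @staticmethod
    def backward(ctx, grad_output):
        err, = ctx.saved_tensors
        delta = ctx.delta
        # Derivative of the piecewise loss: err in the quadratic region,
        # delta * sign(err) in the linear region; divide by numel for the mean.
        grad = torch.where(err.abs() <= delta, err, delta * err.sign())
        grad = grad / err.numel() * grad_output
        # Return one gradient per forward input; non-differentiable
        # inputs (target, delta) get None.
        return grad, None, None

pred = torch.tensor([0.2, 3.0], requires_grad=True)
target = torch.tensor([0.0, 0.0])
loss = MyHuber.apply(pred, target)
loss.backward()
```

You can sanity-check a hand-written backward against finite differences with `torch.autograd.gradcheck` (run it in double precision).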


For example, let's say the truth Y can be 0 or 1. If the estimate y is 0 and the truth is 1, I want the loss to be 1. But if the estimate y is 1 and the truth is 0, I want the loss to be 2.
How should I approach that loss function?
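One way to encode those exact costs in a differentiable form (a sketch, assuming the estimate y is a value in [0, 1], e.g. a sigmoid output) is a loss that is linear in y, so autograd handles the backward pass with no custom code:

```python
import torch

def asymmetric_loss(pred, target):
    # Linear interpolation of the stated costs:
    #   target=1, pred=0 -> cost 1   (missed positive)
    #   target=0, pred=1 -> cost 2   (false positive, penalized twice as hard)
    # Correct predictions (pred == target) cost 0.
    per_example = target * (1.0 - pred) + 2.0 * (1.0 - target) * pred
    return per_example.mean()

pred = torch.tensor([0.0, 1.0], requires_grad=True)  # both predictions wrong
target = torch.tensor([1.0, 0.0])
loss = asymmetric_loss(pred, target)
loss.backward()
```

Because the loss is linear in `pred`, the gradient is a constant per element, pushing each estimate toward its target with asymmetric strength.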