Custom Loss Function in a Differentiable Manner

Hello,

I am trying to create a custom loss function in a differentiable manner. For example, I have two tensors:

a = [2, 8, 9, 10, 2] and b = [1, 2, 6, 7, 8], and my loss function should penalize a - b wherever a - b is positive.

If I write the loss function as follows, I think it is not differentiable because of the “<” operation:

    def loss_fn(a, b):
        diff = a - b
        diff[diff < 0] = 0  # zero out negative differences
        return diff

How can I create this loss function in a differentiable manner?

Thanks in advance.

It should work: the “<” comparison only builds an index mask, and autograd will pass a zero gradient to the masked elements:

import torch

a = torch.randn(2, 2, requires_grad=True)
b = torch.randn(2, 2)

diff = a - b
diff[diff < 0] = 0  # in-place masked assignment; recorded by autograd as IndexPut
print(diff)
> tensor([[0.6480, 0.0000],
          [0.0000, 0.0272]], grad_fn=<IndexPutBackward>)

diff.mean().backward()
print(a.grad)
> tensor([[0.2500, 0.0000],
          [0.0000, 0.2500]])
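The 0.25 entries come from the .mean() reduction: each of the four elements contributes a gradient of 1/4, and the masked (negative) elements receive exactly 0.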
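As a side note, the same behavior can be written without in-place indexing via torch.clamp (or equivalently F.relu), which is also differentiable. A minimal sketch, assuming you want a scalar loss via .mean() (the name hinge_like_loss is just for illustration):

import torch

def hinge_like_loss(a, b):
    # clamp(min=0) keeps positive differences and zeroes the rest;
    # clamped elements receive a zero gradient, just like the masking above
    return torch.clamp(a - b, min=0).mean()

a = torch.randn(2, 2, requires_grad=True)
b = torch.randn(2, 2)

hinge_like_loss(a, b).backward()
print(a.grad)  # 0.25 where a - b > 0, otherwise 0

Avoiding the in-place assignment can also sidestep autograd errors in graphs where the original diff tensor is needed elsewhere.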