How to make our own loss function in PyTorch

I am reading this paper https://arxiv.org/pdf/1703.03130.pdf

I want to implement the loss function with regularization as given in the paper. How can I do that? Do I have to give the gradient formula to PyTorch, or does it automatically differentiate and find the gradient?

If you can write that loss function in terms of already-implemented PyTorch operators, it will be automatically differentiable. In that case, you can just write an nn.Module as if it were a new layer.
If you need to create a new operator, you will have to code how the gradient is computed.
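For the second case, here is a minimal sketch of a hand-written gradient via torch.autograd.Function; the squared-error loss is only an illustration, not the paper's loss:

import torch

class SquareLoss(torch.autograd.Function):
    @staticmethod
    def forward(ctx, pred, target):
        diff = pred - target
        ctx.save_for_backward(diff)
        return (diff ** 2).mean()

    @staticmethod
    def backward(ctx, grad_output):
        diff, = ctx.saved_tensors
        # gradient of mean((pred - target)^2) w.r.t. pred; target gets no gradient
        return grad_output * 2.0 * diff / diff.numel(), None

You would then call it as SquareLoss.apply(pred, target).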

Anyway, several posts about this topic have been written. Look for them.


I think I can write it using PyTorch operators. The loss is the sum of binary cross entropy and a regularization term that involves a matrix norm, and a PyTorch operator is available for the matrix norm.
My doubt is: how can I code that?
For example, for the usual binary cross entropy I did this:
lossfn = nn.BCELoss()
loss = lossfn(y_pred, y_true)
loss.backward()

The loss given in the paper is
BCELoss + matrix norm.
So how should I write that?
How can I specify the inputs to the loss?
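For reference, the matrix-norm operator mentioned above is presumably torch.norm, which computes the Frobenius norm of a matrix by default. A minimal sketch of the paper's penalization term ||A·Aᵀ − I||_F², where A is a stand-in for the annotation (attention) matrix:

import torch

A = torch.rand(5, 20, requires_grad=True)        # stand-in attention matrix
identity = torch.eye(A.size(0))
penalty = torch.norm(A @ A.t() - identity) ** 2  # squared Frobenius norm
penalty.backward()                               # autograd handles the gradient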

you can write something like:

import torch
import torch.nn as nn

class MyLoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.bce = nn.BCELoss()

    def forward(self, y_pred, y_true, A):
        # standard binary cross entropy on the predictions
        x = self.bce(y_pred, y_true)
        # matrix-norm penalization term ||A·Aᵀ − I||_F² from the paper
        identity = torch.eye(A.size(-2), device=A.device)
        y = torch.norm(A @ A.transpose(-2, -1) - identity) ** 2
        return x + y
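The loss then takes the same inputs as BCELoss plus the attention matrix, and autograd differentiates both terms; the names here are just for illustration:

loss_fn = MyLoss()
loss = loss_fn(y_pred, y_true, A)  # A: annotation matrix from the model
loss.backward()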
