I define my custom loss as follows:
```python
def custom_loss(target, output, mask):
    output = torch.neg(output)
    loss = torch.add(target, output)  # target - output
    loss = torch.pow(loss, 2)
    loss = torch.mul(loss, mask)
    loss = torch.mean(loss)
    return loss
```
I’ve seen many people use a class, but I’ve also read that a plain function would be fine. Would that loss work? target, output, and mask are Variables.
Subsequently, in my training loop, I call loss.backward(). The program runs but I just want to confirm my implementation of the loss is correct.
Yes, your approach should work just fine.
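As a quick sanity check, here is a minimal sketch (in current PyTorch, where plain tensors with requires_grad=True replace Variable; the random shapes are just for illustration) showing that the function runs, backpropagates, and produces zero gradients at the masked-out positions:

```python
import torch

def custom_loss(target, output, mask):
    # (target - output)^2, zeroed where mask == 0, then averaged
    loss = torch.pow(torch.add(target, torch.neg(output)), 2)
    loss = torch.mul(loss, mask)
    return torch.mean(loss)

output = torch.randn(4, 10, requires_grad=True)
target = torch.randn(4, 10)
mask = (torch.rand(4, 10) > 0.5).float()

loss = custom_loss(target, output, mask)
loss.backward()
print(output.grad.shape)  # torch.Size([4, 10])
```

Since the squared error is multiplied by the mask before the mean, positions where mask is 0 contribute nothing to the loss, so their gradients are exactly zero.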
The advantage of using a class is that it can hold internal attributes you would otherwise have to pass to your functional approach on every call.
E.g. for nn.CrossEntropyLoss, you could set the reduction argument while creating an instance of this class. If you are using the functional method (F.cross_entropy), you would need to pass these arguments in each call. While this might be cumbersome in some cases, it might make the code cleaner, e.g. if you need to change the weight argument often.
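To make the comparison concrete, here is a small sketch (example shapes and weights are my own) showing that both APIs compute the same result — the class stores the arguments once, while the functional call repeats them:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
weight = torch.tensor([1.0, 2.0, 0.5])

# class API: weight and reduction are stored at construction time
criterion = nn.CrossEntropyLoss(weight=weight, reduction='sum')
loss_class = criterion(logits, targets)

# functional API: the same arguments must be passed on every call
loss_func = F.cross_entropy(logits, targets, weight=weight, reduction='sum')

print(torch.allclose(loss_class, loss_func))  # True
```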
Both APIs have their advantages and disadvantages, and you can choose whichever feels right and keeps the code clean.
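For completeness, a sketch of your masked loss wrapped in an nn.Module (the class name MaskedMSELoss is my own, not a PyTorch class):

```python
import torch
import torch.nn as nn

class MaskedMSELoss(nn.Module):
    """Squared error multiplied by a mask, averaged over all elements."""
    def forward(self, target, output, mask):
        loss = torch.pow(target - output, 2)
        loss = torch.mul(loss, mask)
        return torch.mean(loss)

criterion = MaskedMSELoss()
loss = criterion(torch.randn(4, 10), torch.randn(4, 10), torch.ones(4, 10))
```

Note that torch.mean divides by the total number of elements, including the masked-out ones; if you want the average over valid positions only, you could divide by mask.sum() instead.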