One way of doing this is to create an nn.Module class:
import torch
import torch.nn as nn

class RMSLELoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.mse = nn.MSELoss()

    def forward(self, pred, actual):
        # RMSE of the log-transformed predictions and targets
        return torch.sqrt(self.mse(torch.log(pred + 1), torch.log(actual + 1)))
The equation I used as a reference here is from a Kaggle discussion.
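Written out, it is:

RMSLE(pred, actual) = sqrt( mean( (log(pred + 1) - log(actual + 1))^2 ) )

i.e. MSELoss applied to the log-transformed values, followed by a square root, which is exactly what the forward pass above computes.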
You can then use it just like any other loss function provided by PyTorch:
pred = torch.tensor([600.], requires_grad=True)
actual = torch.tensor([1000.])
criterion = RMSLELoss()
rmsle = criterion(pred, actual)
print(rmsle)
# tensor(0.5102, grad_fn=<SqrtBackward>)
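Since pred was created with requires_grad=True, gradients flow through the loss like any other PyTorch op. A quick sanity check (the printed value follows from the math: for this single-element case the gradient is -1 / (pred + 1)):

rmsle.backward()
print(pred.grad)
# tensor([-0.0017])

One caveat: torch.log(pred + 1) returns NaN for pred < -1 (and -inf at exactly -1), which an unconstrained regression head can produce. Below is a minimal sketch of a guard, assuming clamping predictions at zero is acceptable for your targets; the clamp and the SafeRMSLELoss name are my additions, not part of the original snippet.

import torch
import torch.nn as nn

class SafeRMSLELoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.mse = nn.MSELoss()

    def forward(self, pred, actual):
        # Clamp so the log stays well-defined; adjust the bound to your data.
        pred = torch.clamp(pred, min=0.0)
        # torch.log1p(x) == torch.log(x + 1), but more accurate near zero.
        return torch.sqrt(self.mse(torch.log1p(pred), torch.log1p(actual)))

Note that clamping zeroes out the gradient for negative predictions, so some people prefer a smooth alternative such as torch.nn.functional.softplus instead of a hard clamp.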
I hope this helps.