My model trains well with a dataset using the MSELoss, L1Loss, and SmoothL1Loss loss functions. I wrote a custom loss function that trains the model on a small subsample, but does not train it on the full dataset: the custom loss does not change over many epochs, and for good measure I computed the L1Loss at the end of each epoch and that did not change either. Here is the loss function class. Is there anything wrong with it?
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProportionalLoss(nn.Module):
    def __init__(self):
        super(ProportionalLoss, self).__init__()

    def forward(self, inputs, targets):
        # Compare predictions and targets on a log scale
        labels = torch.log10(targets)
        predictions = torch.log10(torch.clamp(inputs, min=1, max=2500))
        loss = F.l1_loss(predictions, labels, reduction='mean')
        return loss
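One thing worth checking, as a sketch and not a definitive diagnosis: torch.clamp has a zero gradient wherever the input falls outside [min, max]. If many raw model outputs land below 1 or above 2500 on the full dataset, the clamped values are constants and backpropagation delivers no learning signal through them. A minimal check (the tensor values here are made-up illustrations):

import torch

# One in-range value (100.0) and two out-of-range values (0.5 and 3000.0)
x = torch.tensor([0.5, 100.0, 3000.0], requires_grad=True)
y = torch.log10(torch.clamp(x, min=1, max=2500))
y.sum().backward()
print(x.grad)  # gradient is 0 for the clamped entries (0.5 and 3000.0)

If the gradients for most of your batch look like the clamped entries here, that would explain a loss that never moves.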