Custom Loss Function Fails

My model trains well on a dataset using the MSELoss, L1Loss, and SmoothL1Loss loss functions. I wrote a custom loss function that can train the model on a small subsample, but it does not train the model on the full dataset. The custom loss does not change over many epochs, and for good measure I calculated the L1Loss at the end of each epoch, which also did not change. Here is the loss function class. Is there anything wrong with it?

import torch
import torch.nn as nn
import torch.nn.functional as F

class ProportionalLoss(nn.Module):
    def __init__(self):
        super(ProportionalLoss, self).__init__()

    def forward(self, inputs, targets):
        labels = torch.log10(targets)
        predictions = torch.log10(torch.clamp(inputs, min=1, max=2500))
        loss = F.l1_loss(predictions, labels, reduction='mean')
        return loss
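As a quick sanity check (a minimal sketch, not from the original post, with made-up dummy values), the loss can be exercised on standalone tensors. One thing worth knowing about torch.clamp is that inputs outside the [1, 2500] range receive a gradient of exactly zero, so any prediction that saturates the clamp contributes nothing to training:

```python
import torch
import torch.nn.functional as F

# Dummy inputs: one below the clamp minimum, one inside the
# range, one above the clamp maximum.
inputs = torch.tensor([0.5, 100.0, 3000.0], requires_grad=True)
targets = torch.tensor([10.0, 10.0, 10.0])

labels = torch.log10(targets)
predictions = torch.log10(torch.clamp(inputs, min=1, max=2500))
loss = F.l1_loss(predictions, labels, reduction='mean')
loss.backward()

# Entries clamped at the boundary get zero gradient; only the
# in-range entry receives a nonzero gradient.
print(inputs.grad)
```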

Since your model is training with a small dataset and failing with the entire set, you could check how the gradients (and their magnitudes, etc.) behave in both cases. Since the loss apparently stops changing after a while, I would guess that the gradients might converge towards zero.
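One way to do that check (a minimal sketch with a hypothetical stand-in model, not the poster's actual network) is to print the per-parameter gradient norms after a backward pass; norms that collapse to zero on the full dataset but not on the subsample would support the vanishing-gradient guess:

```python
import torch
import torch.nn as nn

# Hypothetical toy model and loss standing in for the real ones.
model = nn.Linear(4, 1)
loss_fn = nn.L1Loss()

x = torch.randn(8, 4)
y = torch.rand(8, 1)

loss = loss_fn(model(x), y)
loss.backward()

# Inspect per-parameter gradient norms; values near zero suggest
# the loss surface is flat (e.g. a saturated clamp) for this batch.
for name, p in model.named_parameters():
    print(name, p.grad.norm().item())
```

Running this once per epoch on a fixed batch makes it easy to compare the two training runs side by side.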

Thank you Patrick. I had checked the weights of the model and they quickly became static. Using the George Costanza theorem, I modified my code to do the exact opposite of what I was intending to do with the loss function…and it is training on the full dataset!

So, I confirmed that I did successfully code a loss function; there is just something wrong with my intent. I will keep researching.