Hi! I'm facing the following problem:
I'm trying to use a custom loss (RMSLE):
```python
class RMSLELoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.mse = nn.MSELoss(reduction='none')

    def forward(self, pred, actual):
        return torch.sqrt(self.mse(torch.log(pred + 1), torch.log(actual + 1)))
```
During training I call loss.mean() to average the per-element results:
```python
output = self._model(batch).squeeze(dim=1)
target = batch['target'].to(self._device)
loss = loss_func(output, target)
loss = loss.mean()
```
So I just average the loss myself, but after a few iterations the weights and outputs become NaNs.
But if I change the __init__ of my RMSLE class to use a plain MSELoss() with the default reduction, so it averages by itself, all the problems go away. Likewise, if I use plain MSELoss() instead of my custom class, the problem doesn't show up at all. Why does this happen?
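In case it helps, here is a minimal self-contained repro of the NaN gradients with no model involved (the tensor values are made up for illustration; note that two of the predictions match the targets exactly):

```python
import torch
import torch.nn as nn

# Made-up example values; two entries of pred equal actual exactly
pred = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
actual = torch.tensor([1.0, 2.5, 3.0])

mse = nn.MSELoss(reduction='none')
# Elementwise sqrt of the per-element squared log error, then mean —
# the same computation as my RMSLELoss followed by loss.mean()
loss = torch.sqrt(mse(torch.log(pred + 1), torch.log(actual + 1))).mean()
loss.backward()

print(pred.grad)  # NaN at the positions where pred == actual
```

With the default reduction (sqrt of the already-averaged MSE) the same values give a finite gradient, so it seems tied to taking sqrt elementwise before averaging.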