I am training a multitask model that has both classification and regression tasks, so I am using two loss functions:
loss_function_reg = nn.MSELoss()
loss_function_clf = nn.BCEWithLogitsLoss()
and combine them:
loss_reg = loss_function_reg(prediction_reg, batch[1].cuda())
loss_clf = loss_function_clf(prediction_clf.flatten(), batch[2].cuda())
loss = loss_reg + loss_clf
loss.backward()
I have far more regression tasks than classification tasks. Should I weight the losses to account for that? Each loss function returns the average over its own tasks, so the sum of the two losses is "biased" towards the loss function with fewer variables/tasks. Or did I just overthink this?
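For concreteness, here is a minimal self-contained sketch of the weighting I have in mind: scale each loss by its share of the total task count, so every individual task contributes roughly equally. The task counts, batch size, and random tensors are made-up placeholders, not my real data:

```python
import torch
import torch.nn as nn

# Hypothetical numbers for illustration only
n_reg_tasks, n_clf_tasks = 10, 2
batch_size = 4

loss_function_reg = nn.MSELoss()
loss_function_clf = nn.BCEWithLogitsLoss()

# Stand-ins for model outputs and targets
prediction_reg = torch.randn(batch_size, n_reg_tasks, requires_grad=True)
target_reg = torch.randn(batch_size, n_reg_tasks)
prediction_clf = torch.randn(batch_size, n_clf_tasks, requires_grad=True)
target_clf = torch.randint(0, 2, (batch_size, n_clf_tasks)).float()

# Each loss is an average over its own tasks
loss_reg = loss_function_reg(prediction_reg, target_reg)
loss_clf = loss_function_clf(prediction_clf, target_clf)

# Re-weight so each *task* contributes equally to the total,
# rather than each *loss function* contributing equally
n_total = n_reg_tasks + n_clf_tasks
loss = (n_reg_tasks / n_total) * loss_reg + (n_clf_tasks / n_total) * loss_clf
loss.backward()
```

Is something like this the right way to think about it, or is the plain unweighted sum fine in practice?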