I want to maximize one specific loss term while training the model.
I have the following code:
criterion_MSE = nn.MSELoss()
criterion_CE = nn.CrossEntropyLoss()
...
loss_MSE = criterion_MSE(predict1, target1)
...
loss_CE = criterion_CE(predict2, target2)
total_loss = loss_MSE + loss_CE
total_loss.backward()
optimizer.step()
If I want to maximize loss_CE here (while still minimizing loss_MSE), is it enough to just multiply loss_CE by negative one?
If not, please tell me how I can do it.
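For context, the sign flip does what the question asks: since the optimizer performs gradient descent on total_loss, minimizing loss_MSE - loss_CE descends on loss_MSE and ascends on loss_CE. Below is a minimal, self-contained sketch of that idea. The two heads (head1, head2) and the data tensors are illustrative stand-ins for predict1/predict2 in the question, made independent so the effect of each term is easy to observe:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-ins for the two prediction heads in the question;
# they share no parameters, so each loss term drives its own head.
head1 = nn.Linear(4, 3)  # regression head -> loss_MSE
head2 = nn.Linear(4, 3)  # classification head -> loss_CE
optimizer = torch.optim.SGD(
    list(head1.parameters()) + list(head2.parameters()), lr=0.05
)

criterion_MSE = nn.MSELoss()
criterion_CE = nn.CrossEntropyLoss()

x = torch.randn(8, 4)
target1 = torch.randn(8, 3)          # regression targets
target2 = torch.randint(0, 3, (8,))  # class labels

history = []
for step in range(25):
    optimizer.zero_grad()
    loss_MSE = criterion_MSE(head1(x), target1)
    loss_CE = criterion_CE(head2(x), target2)
    # The sign flip turns descent on loss_CE into ascent:
    # minimizing (loss_MSE - loss_CE) minimizes MSE and maximizes CE.
    total_loss = loss_MSE - loss_CE
    total_loss.backward()
    optimizer.step()
    history.append((loss_MSE.item(), loss_CE.item()))

print(f"MSE: {history[0][0]:.3f} -> {history[-1][0]:.3f}")  # decreases
print(f"CE:  {history[0][1]:.3f} -> {history[-1][1]:.3f}")  # increases
```

One caveat worth noting: cross-entropy is unbounded above, so pure ascent on loss_CE can diverge; in practice a weighting factor such as total_loss = loss_MSE - lambda_ce * loss_CE (lambda_ce is a hypothetical hyperparameter name) is often used to keep the two terms balanced.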