Use different final activations for different loss functions with the same model

I would like to use different final activations for different loss functions that jointly train the same model. In code, that would look like:

model = MyModel()
criterion1 = nn.BCELoss()  # expects probabilities in [0, 1]
criterion2 = nn.L1Loss()   # applied to the raw output

out1 = model(input)         # raw model output
out2 = torch.sigmoid(out1)  # sigmoid-activated output for the BCE loss

loss = criterion1(out2, target1) + criterion2(out1, target2)

optimizer.zero_grad()
loss.backward()
optimizer.step()

Is this possible, and if so, how? Thank you!

This minimal example might seem to make little sense, but in my actual code the model is the generator of a GAN, which is trained with multiple loss functions, if that helps…
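For what it's worth, here is a minimal self-contained sketch of the pattern, using a hypothetical single `nn.Linear` layer in place of `MyModel` and random tensors in place of real data. Autograd tracks both branches (the raw output and the sigmoid-activated one), so a single `backward()` call accumulates gradients from both losses into the same parameters:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for MyModel: one linear layer.
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

criterion1 = nn.BCELoss()  # expects probabilities in [0, 1]
criterion2 = nn.L1Loss()   # applied to the raw output here

input = torch.randn(8, 4)
target1 = torch.randint(0, 2, (8, 1)).float()  # binary targets for BCE
target2 = torch.randn(8, 1)                    # real-valued targets for L1

out1 = model(input)         # raw output
out2 = torch.sigmoid(out1)  # sigmoid-activated output

optimizer.zero_grad()
loss = criterion1(out2, target1) + criterion2(out1, target2)
loss.backward()   # gradients flow through both branches into the same weights
optimizer.step()

print(model.weight.grad is not None)  # → True
```

Both loss terms share the computation graph up to `out1`, so their gradients simply add at the shared parameters; nothing special is required beyond summing the losses before calling `backward()`.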