Two different loss functions within a batch

I have concatenated two different dataloaders in PyTorch. My aim is to minimize loss1 for samples coming from dataloader1 and loss2 for samples coming from dataloader2.
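Roughly, the setup looks like this (the dataset contents are placeholders; I tag each sample with a 0/1 flag recording which dataset it came from):

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Placeholder datasets; the extra 0/1 tensor records which dataset each
# sample came from, so the right loss can be chosen later
ds1 = TensorDataset(torch.randn(100, 10), torch.randn(100, 1), torch.zeros(100))
ds2 = TensorDataset(torch.randn(100, 10), torch.randn(100, 1), torch.ones(100))

# One loader over the concatenation, so a batch can mix samples from both
loader = DataLoader(ConcatDataset([ds1, ds2]), batch_size=32, shuffle=True)
```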

What would be the ideal/best method to do this in PyTorch?

What methods have you tried so far? Are you trying to minimize both losses at the same time? If so, you might want to define some combined loss, say loss1 + loss2.
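Something along these lines, for example. This is only a sketch with placeholder loaders, model, and loss choices; it draws one batch from each loader per step and backpropagates through the sum:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder loaders standing in for the two dataloaders in the question
loader1 = DataLoader(TensorDataset(torch.randn(100, 10), torch.randn(100, 1)), batch_size=32)
loader2 = DataLoader(TensorDataset(torch.randn(100, 10), torch.randn(100, 1)), batch_size=32)

model = nn.Linear(10, 1)                     # placeholder model shared by both sample types
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn1 = nn.MSELoss()                      # loss for dataloader1 samples
loss_fn2 = nn.L1Loss()                       # loss for dataloader2 samples

for (x1, y1), (x2, y2) in zip(loader1, loader2):
    optimizer.zero_grad()
    combined = loss_fn1(model(x1), y1) + loss_fn2(model(x2), y2)
    combined.backward()                      # one backward pass through both losses
    optimizer.step()
```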

My ultimate aim is to minimize loss1 for some samples and loss2 for other samples.

I tried minimizing a*loss1 + b*loss2, with a=1, b=0 for samples from dataloader1 and a=0, b=1 for samples from dataloader2. I think this is a hacky solution and wanted to know whether PyTorch has any special provisions for this.
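Concretely, this is what I do now: compute both losses elementwise with reduction='none' and pick per sample using the 0/1 source flag from the setup above (the mixed_loss helper and the tensor names are my own placeholders). Since zero-weighted terms contribute zero gradient, each sample is effectively trained with only its own loss:

```python
import torch
import torch.nn as nn

loss_fn1 = nn.MSELoss(reduction='none')   # elementwise, so we can weight per sample
loss_fn2 = nn.L1Loss(reduction='none')

def mixed_loss(pred, target, source):
    """source is 0 for dataloader1 samples and 1 for dataloader2 samples."""
    b = source.float().unsqueeze(1)       # per-sample weight b; a = 1 - b
    a = 1.0 - b
    # a*loss1 + b*loss2 with a, b chosen per sample rather than per batch
    return (a * loss_fn1(pred, target) + b * loss_fn2(pred, target)).mean()

# Usage with placeholder tensors (pred/target of shape (N, 1)):
pred = torch.randn(8, 1, requires_grad=True)
target = torch.randn(8, 1)
source = torch.tensor([0, 1, 0, 0, 1, 1, 0, 1])
mixed_loss(pred, target, source).backward()
```

An equivalent selection could be written with torch.where over the two elementwise loss tensors instead of the a/b arithmetic.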