Variation of the same loss to optimize two different networks

Hi,

Starting from the same criterion, I would like to compute two different losses to optimize two networks. The second network receives images and transforms them to meet the input requirements of the resnet18. I then want the resnet to minimize its loss over all samples, but I want the second network to minimize only the loss coming from a particular class.

import torch
import torch.nn as nn
import torch.optim as optim
from torchvision.models import resnet18

resnet = resnet18()
network_2 = other_network()  # my second network that transforms the images
criterion = nn.CrossEntropyLoss()
optimizer1 = optim.Adam(resnet.parameters())
optimizer2 = optim.Adam(network_2.parameters())

generated_samples = network_2(inputs)

# First loss: over all samples, to update the resnet
classifier_outputs = resnet(generated_samples)
loss_1 = criterion(classifier_outputs, labels)
optimizer1.zero_grad()
loss_1.backward(retain_graph=True)  # keep the graph alive for the second backward pass
optimizer1.step()

indexes = (labels == 1).nonzero(as_tuple=True)[0]  # take only the samples of class == 1 (stays on the labels' device, no CPU round-trip)
new_outputs = torch.index_select(classifier_outputs, 0, indexes)
new_labels = torch.index_select(labels, 0, indexes)
loss_2 = criterion(new_outputs, new_labels)
optimizer2.zero_grad()
loss_2.backward()
optimizer2.step()

However, this is not working as it should. I believe there is a conflict between the gradients of the two backward passes that I am unable to understand.
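For reference, here is a variant I have been considering, modeled on how GAN training alternates updates: detach the generated samples for the resnet update so that loss_1 never reaches network_2, then run a fresh forward pass through the already-updated resnet for network_2's update. This is only a sketch under my assumptions (other_network, inputs, and labels are the placeholders from my setup above), and I am not sure it is the right way to avoid the conflict:

# Update the resnet on all samples; detaching cuts the graph
# so loss_1 does not backpropagate into network_2
outputs_for_resnet = resnet(generated_samples.detach())
loss_1 = criterion(outputs_for_resnet, labels)
optimizer1.zero_grad()
loss_1.backward()
optimizer1.step()

# Update network_2 on class-1 samples only, with a fresh forward
# pass through the (already updated) resnet
mask = labels == 1
if mask.any():  # guard against batches with no class-1 samples
    outputs_for_net2 = resnet(generated_samples)
    loss_2 = criterion(outputs_for_net2[mask], labels[mask])
    optimizer2.zero_grad()
    loss_2.backward()  # this also fills resnet's .grad, but
                       # optimizer1.zero_grad() discards it next iteration
    optimizer2.step()

Would this ordering avoid the conflict, or am I misunderstanding where the gradients clash?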