Different networks with same optimizer

Suppose I have 4 independent networks and only 1 optimizer:

import torch

# network() is assumed to return a torch.nn.Module; gt_1 ... gt_4 are target
# tensors with the same shape as the corresponding network outputs.
network_1 = network()
network_2 = network()
network_3 = network()
network_4 = network()

# A single Adam instance holding the parameters of all four networks
optimizer = torch.optim.Adam(list(network_1.parameters()) +
                             list(network_2.parameters()) +
                             list(network_3.parameters()) +
                             list(network_4.parameters()), lr=0.0001)

input = torch.rand((2, 2, 2))

optimizer.zero_grad()  # clear any stale gradients before the backward passes
# Each difference is reduced to a scalar so .backward() can be called on it
loss_1 = (gt_1 - network_1(input)).pow(2).mean()
loss_2 = (gt_2 - network_2(input)).pow(2).mean()
loss_3 = (gt_3 - network_3(input)).pow(2).mean()
loss_4 = (gt_4 - network_4(input)).pow(2).mean()
loss_1.backward()
loss_2.backward()
loss_3.backward()
loss_4.backward()
optimizer.step()

The loss of each network is computed independently. When doing backpropagation, the gradient of each loss should only affect its corresponding network even though I use the same optimizer, right? That is, network_1's parameters will be updated only based on the gradient computed from loss_1 and will NOT be affected by the other losses.

Yes. The four computation graphs are independent, so each loss_i.backward() only populates the .grad attributes of the parameters that produced loss_i. optimizer.step() then updates every parameter using only its own gradient (and Adam's per-parameter state), so the networks have no effect on each other.
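
To see this concretely, here is a minimal sketch of the same setup with two networks, using nn.Linear(2, 2) as a stand-in for network() and a random tensor as a stand-in for gt_1 (both assumptions on my part). It calls backward() on loss_1 only and checks which parameters actually received gradients:

import torch
import torch.nn as nn

network_1 = nn.Linear(2, 2)  # stand-in for network()
network_2 = nn.Linear(2, 2)
optimizer = torch.optim.Adam(list(network_1.parameters()) +
                             list(network_2.parameters()), lr=0.0001)

input = torch.rand((2, 2, 2))
gt_1 = torch.rand((2, 2, 2))  # stand-in target

loss_1 = (gt_1 - network_1(input)).pow(2).mean()
loss_1.backward()

# Only network_1's parameters received gradients from loss_1
print(all(p.grad is not None for p in network_1.parameters()))  # True
print(all(p.grad is None for p in network_2.parameters()))      # True

optimizer.step()  # Adam skips parameters whose .grad is None

The one thing to keep in mind is that gradients accumulate across backward() calls on the same parameters, so call optimizer.zero_grad() between optimization steps; that accumulation is per parameter and never leaks from one network to another.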