I have a question about training in PyTorch.
Here is the pseudocode I want to ask about:
```python
import copy
import torch

model1 = Net()                  # Net is my model class
model2 = copy.deepcopy(model1)  # independent copy of model1

# torch.optim itself is not callable; SGD stands in for whatever optimizer I use
optimizer1 = torch.optim.SGD(model1.parameters(), lr=lr)
optimizer2 = torch.optim.SGD(model2.parameters(), lr=lr)

for i in range(len(dataset)):
    x = random_transform(dataset[i])  # placeholder: some random transform of the sample
    y = dataset[i]
    update_step1(model1, optimizer1, x, y)
    update_step2(model2, optimizer2, x, y)
```
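Each update step looks roughly like this (a minimal sketch; `criterion` and the forward pass are placeholders for my actual training code, and `update_step2` is analogous):

```python
import torch.nn as nn

criterion = nn.MSELoss()  # placeholder loss

def update_step1(model, optimizer, x, y):
    optimizer.zero_grad()          # clear this model's gradients only
    loss = criterion(model(x), y)  # forward pass through this model only
    loss.backward()
    optimizer.step()               # update this model's parameters only
```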
I thought each model would be updated separately inside its own function (update_step1, update_step2), but after training the two models' parameter values were the same.
What should I do to train both models at the same time, but independently?
Note that the reason for training them at the same time is to feed the same random x value to both models.
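For reference, this is roughly how I compared the two models and saw identical values (a sketch, iterating over the corresponding parameter pairs):

```python
# Sketch: compare corresponding parameters of the two models after training
for p1, p2 in zip(model1.parameters(), model2.parameters()):
    print(torch.equal(p1, p2))  # every pair printed True in my run
```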