Optimizing the Wrong Model in PyTorch

I am somewhat new to this, so please bear with me. Recently I was training a bunch of models in PyTorch when I ran into this problem. Suppose I have two models, m1 and m2.

import torch

m1 = modelClass1()
optimizer = torch.optim.Adam(m1.parameters())  # optimizer registers only m1's parameters
m1.train()
# ... forward pass, loss.backward() ...
optimizer.step()

m2 = modelClass2()
m2.train()
# forgot to create a new optimizer for m2, so this still steps m1's parameters
optimizer.step()

I forgot to reinitialize my optimizer, and the loss and accuracy were changing for my first model but not for my second model (which makes sense, because I never set the optimizer to optimize the second model's parameters).

Given that modelClass1() and modelClass2() are different, why was no error or warning raised when I forgot to give the optimizer the second model's parameters? What exactly does the optimizer do with the parameters passed to it? I'm looking for some insight into understanding this.

When step() is called, the optimizer iterates over all the parameters you gave it at construction time and takes a gradient step for those parameters, and only those; it has no knowledge of m2's parameters, so there is nothing for it to raise an error about.
For your second step(), since you did not zero out the gradients, it simply takes another step on m1's parameters using the previously computed gradients.
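As a concrete sketch (using torch.nn.Linear as a stand-in for modelClass1/modelClass2, which aren't shown here), the usual pattern is one optimizer per model, with zero_grad() called before each backward pass:

import torch

# stand-ins for the real model classes, just for illustration
m1 = torch.nn.Linear(10, 1)
m2 = torch.nn.Linear(10, 1)

# each optimizer sees only the parameters it was constructed with
opt1 = torch.optim.Adam(m1.parameters())
opt2 = torch.optim.Adam(m2.parameters())

x = torch.randn(4, 10)

opt1.zero_grad()                 # clear any stale gradients
loss1 = m1(x).pow(2).mean()      # dummy loss, just for the sketch
loss1.backward()
opt1.step()                      # updates only m1's parameters

opt2.zero_grad()
loss2 = m2(x).pow(2).mean()
loss2.backward()
opt2.step()                      # updates only m2's parameters

If you really want a single optimizer for both models, you can pass it both parameter sets, e.g. torch.optim.Adam(list(m1.parameters()) + list(m2.parameters())); otherwise a fresh optimizer per model, as above, is the simplest fix.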