Issue with weight updates

I have a model that uses two pre-existing models to extract representations. These representations are concatenated and passed into a sequential classifier, and the whole thing is trained end-to-end. Or at least it should be, but the weights of the two representation-extraction models aren't updating. A simple example might look something like this:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, model1, model2):
        super(Net, self).__init__()
        # Pretrained feature extractors
        self.model1 = model1
        self.model2 = model2
        # h1: size of the concatenated features, h2: number of outputs
        self.classifier = nn.Sequential(nn.Linear(h1, h1), nn.ReLU(), nn.Linear(h1, h2))

    def forward(self, x):
        x1 = self.model1(x)
        x2 = self.model2(x)
        out = self.classifier(torch.cat([x1, x2], dim=1))
        return out

Am I doing something obviously wrong here?

How did you add the parameters to the optimizer?

I did the following:

import torch.optim as optim

model = Net(model1, model2)
optimizer = optim.SGD([{'params': model.model1.parameters(), 'lr': 1e-4},
                       {'params': model.model2.parameters(), 'lr': 1e-4},
                       {'params': model.classifier.parameters(), 'lr': 0.1}])
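
The training loop itself follows the usual pattern, something like this (the loss function and data loader below are just placeholders):

criterion = nn.CrossEntropyLoss()   # placeholder loss; use whatever fits the task

for x, y in loader:                 # 'loader' stands in for whatever DataLoader you use
    optimizer.zero_grad()
    out = model(x)
    loss = criterion(out, y)
    loss.backward()                 # gradients reach model1, model2 and the classifier
    optimizer.step()                # all three parameter groups get updated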

Never mind! It was a false alarm. I made a mistake when checking whether the weights were updating. Leaving this here in case it helps someone else.
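
In case it's useful to anyone, a simple way to verify that weights really are updating is to snapshot the parameters before a single optimizer step and compare afterwards (x, y, and criterion below are placeholders):

# Clone every parameter tensor before one update step
before = [p.detach().clone() for p in model.parameters()]

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()

# Count how many parameter tensors actually changed
changed = [not torch.equal(b, p) for b, p in zip(before, model.parameters())]
print(f"{sum(changed)} of {len(changed)} parameter tensors changed")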