How to freeze a network during training?

I have two networks, net1 and net2, and an input x. I want to feed x to net1 to generate the prediction pred_x1. Then pred_x1 is fed to net2 to generate pred_x2.

I want to freeze net1 while training net2. The loss is computed as the MSE between pred_x1 and pred_x2. My implementation is shown below, but when I checked, the weights of net1 were still being updated during training. How can I fix it?

net1 = Model_net1()
net1.eval()
net2 = Model_net2()
net2.train()

# Freeze net1 so its parameters receive no gradients
for param in net1.parameters():
    param.requires_grad = False

# Only net1's forward pass should run under no_grad;
# net2's forward, the loss, and backward() need the autograd graph
with torch.no_grad():
    pred_x1 = net1(x)

pred_x2 = net2(pred_x1)
loss = mse(pred_x1, pred_x2)

optimizer.zero_grad()
loss.backward()
optimizer.step()

Could you try to detach pred_x1 before passing it to net2?
Also, did you pass the parameters of net1 to the optimizer, and are you using weight decay?
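
For reference, a minimal sketch of that detach pattern. It assumes mse is torch.nn.MSELoss() and that the optimizer is built only over net2.parameters(), so a step cannot touch net1's weights even with weight decay enabled:

import torch
import torch.nn as nn

mse = nn.MSELoss()
# Assumption: only net2's parameters are handed to the optimizer
optimizer = torch.optim.Adam(net2.parameters(), lr=1e-3)

pred_x1 = net1(x).detach()    # cut the graph: no gradients can flow back into net1
pred_x2 = net2(pred_x1)       # only net2 participates in autograd

loss = mse(pred_x2, pred_x1)  # regress net2's output onto the frozen target
optimizer.zero_grad()
loss.backward()
optimizer.step()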

Yes, I also tried detach(), and I did not pass net1's parameters to any optimizer.

How did you check that net1's parameters were updated?
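
One reliable way to check (a minimal sketch, assuming a single training step) is to snapshot net1's weights before the step and compare them afterwards:

# Snapshot net1's weights, run one training step, then compare
before = [p.detach().clone() for p in net1.parameters()]
# ... run one forward/backward/optimizer.step() here ...
changed = any(not torch.equal(b, p) for b, p in zip(before, net1.parameters()))
print("net1 parameters changed:", changed)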

Sorry, I was mistaken. I checked it again and the parameters are unchanged; it was a bug in my code. Sorry bro!