Train only one network out of two

Hi
I have two networks. The input data goes into network 1 (to be trained), and its output is then fed into network 2 (already trained).

The output of network 2 is compared with the original input to network 1 to compute the loss. How can I train network 1 in PyTorch so that only network 1's parameters are updated, while the loss still backpropagates through network 2 into network 1?

I'd appreciate a simple explanation, with example code if possible.

Thanks
Sal

You can simply pass the output of network 1 into network 2 and give only network 1's parameters to the optimizer:

# setup
import torch
import torch.nn as nn
import torch.nn.functional as F

network1 = nn.Linear(10, 5)
network2 = nn.Linear(5, 10)
optimizer = torch.optim.Adam(network1.parameters(), lr=1e-3)

# freeze network2
for param in network2.parameters():
    param.requires_grad = False

# input data
x = torch.randn(1, 10)

# training
for epoch in range(1000):
    optimizer.zero_grad()
        
    # forward pass
    out = network1(x)
    out = network2(out)
    
    # calculate loss
    loss = F.mse_loss(out, x)
    
    # calculate gradients
    loss.backward()
    
    # weight update
    optimizer.step()

Note that these networks are just single linear layers here, and you would of course need to replace them with your real models.
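If you want to double-check that only network 1 is being trained, you can snapshot the weights before the loop and compare them afterwards. Here is a minimal, self-contained sketch of the same setup (the layer sizes, seed, and learning rate are just the toy values from above):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
network1 = nn.Linear(10, 5)
network2 = nn.Linear(5, 10)
optimizer = torch.optim.Adam(network1.parameters(), lr=1e-3)

# freeze network2 so no gradients are computed for it
for param in network2.parameters():
    param.requires_grad = False

# snapshot the weights before training
w1_before = network1.weight.clone()
w2_before = network2.weight.clone()

x = torch.randn(1, 10)
for _ in range(10):
    optimizer.zero_grad()
    out = network2(network1(x))
    loss = F.mse_loss(out, x)
    loss.backward()
    optimizer.step()

# network2 stayed fixed, network1 was updated
print(torch.equal(network2.weight, w2_before))  # True
print(torch.equal(network1.weight, w1_before))  # False
print(network2.weight.grad is None)             # True, gradients were never computed for it
```

One additional note: if network 2 contains layers such as dropout or batch norm, you should also call `network2.eval()` so its behavior stays fixed during these forward passes.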

Thank you for the simple example code. It definitely helped. This is what I was hoping to see.