How to skip parameter updating

Hello.

I am creating an architecture which uses two networks (N1, N2).
N1 is already trained, so I want to use the output of N1 as the input of N2.
However, I don't want to update the parameters of N1 in the process (I want to train only N2).

How can I do this?

You can just give the parameters of N2 to the optimizer that you choose. Since N1's parameters are never passed to the optimizer, they will not be updated.

The code can look like this:

optimizer = torch.optim.Adam(N2.parameters(), lr=1e-3)   # pass only N2's parameters; lr is just an example

input = your_input_tensor     # placeholder for your data; wrapping it in Variable is no longer needed
N1_out = N1(input)            # forward pass through the frozen, pretrained N1
N2_out = N2(N1_out)           # forward pass through N2, the network being trained
loss = compute_loss(N2_out)

optimizer.zero_grad()
loss.backward()
optimizer.step()              # updates only N2's parameters
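For reference, here is a minimal self-contained sketch of the same idea, assuming two small stand-in networks (N1 as the pretrained feature extractor, N2 as the trainable head) and a dummy MSE loss; the names, layer sizes, and data are illustrative only. Only N2's parameters are given to the optimizer, so N1 is never updated. Wrapping the N1 forward pass in torch.no_grad() is optional, but it skips building the autograd graph through N1 and saves memory when N1 is only used as a fixed feature extractor.

import torch
import torch.nn as nn

# Stand-ins for the real networks: pretend N1 is already trained.
N1 = nn.Sequential(nn.Linear(10, 16), nn.ReLU())
N2 = nn.Linear(16, 1)

# Only N2's parameters are handed to the optimizer, so N1's weights never change.
optimizer = torch.optim.Adam(N2.parameters(), lr=1e-3)

x = torch.randn(32, 10)        # dummy batch of inputs
target = torch.randn(32, 1)    # dummy targets

for step in range(100):
    with torch.no_grad():      # skip gradient tracking through the frozen N1
        features = N1(x)

    prediction = N2(features)
    loss = nn.functional.mse_loss(prediction, target)

    optimizer.zero_grad()
    loss.backward()            # gradients flow only into N2's parameters
    optimizer.step()           # updates only N2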