Use a single optimizer for multiple nets

I am training a GAN with a discriminator, a generator, and one extra tensor that is also used in the loss calculation.
Pseudo code:
other_tensor = torch.tensor([0.0], requires_grad=True)
loss = disc_loss + gen_loss + f(other_tensor)
optimizer = Adam(list(gen.parameters()) + list(self._measure.parameters()), lr=1e-4)

But this does not optimize other_tensor. How should I add it to my optimizer?

Hi,

You can simply pass other_tensor to the optimizer just like the model parameters.

For instance, let’s assume we have two models, model0 and model1, plus other_tensor to be optimized:

import itertools
import torch

optim = torch.optim.Adam(params=itertools.chain(model0.parameters(), model1.parameters(), [other_tensor]))
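
To make that concrete, here is a self-contained toy example (the two nn.Linear models and the loss are made-up stand-ins, not your GAN) showing that a single optim.step() then updates both models and other_tensor:

import itertools
import torch
import torch.nn as nn

model0 = nn.Linear(4, 4)                            # stand-in for the generator
model1 = nn.Linear(4, 1)                            # stand-in for the discriminator / measure
other_tensor = torch.zeros(1, requires_grad=True)   # extra learnable tensor

optim = torch.optim.Adam(
    itertools.chain(model0.parameters(), model1.parameters(), [other_tensor]),
    lr=1e-4,
)

x = torch.randn(8, 4)
loss = model1(model0(x)).mean() + other_tensor.sum()  # toy loss touching all three
optim.zero_grad()
loss.backward()
optim.step()  # all three parameter sets receive gradients and are updated

print(other_tensor.grad)  # a tensor, so other_tensor is being optimized too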

Bests

Thanks @Nikronic, it worked that way. I am stuck with another problem. I have created the optimizer and passed the parameters. Suppose in some training step I want to freeze one of the nets and only update the weights of the others. Is that possible?

You can wrap the computation for the model you don’t want to update inside torch.no_grad(); then no gradients will be accumulated for the parameters used inside the no_grad block:

import torch

x = torch.tensor([3.])  # input
w = torch.tensor([1.]).requires_grad_(True)  # params of model0
b = torch.tensor([0.]).requires_grad_(True)  # params of model1
optim = torch.optim.Adam([w, b])

# now skip updating model0
with torch.no_grad():
    y = x * w  # no graph is recorded here, so no grad reaches w
y = y + b      # graph is recorded here, so grad reaches b

# update params
y.backward()
optim.step()  # this only updates `model1`: since `w.grad` is None, it is as if `w` had never been passed to the optim object

# print some grads
print(w.grad)  # None
print(b.grad)  # tensor([1.])
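
The same pattern carries over to whole nets sharing one optimizer. A minimal, self-contained sketch (the two nn.Linear nets below are hypothetical stand-ins for your generator and discriminator): run the frozen net’s forward pass under torch.no_grad(), and its parameters keep grad=None, so optim.step() leaves them untouched for that step.

import itertools
import torch
import torch.nn as nn

net0 = nn.Linear(4, 4)  # the net to freeze for this step
net1 = nn.Linear(4, 1)
optim = torch.optim.Adam(itertools.chain(net0.parameters(), net1.parameters()), lr=1e-4)

x = torch.randn(8, 4)

optim.zero_grad()
with torch.no_grad():
    h = net0(x)        # no graph recorded for net0's parameters
out = net1(h)          # graph recorded for net1
out.mean().backward()  # grads only reach net1's parameters
optim.step()           # net0 is left untouched this step

print(net0.weight.grad)  # None
print(net1.weight.grad)  # a tensor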