I am training a GAN. I have a discriminator, a generator, and one additional tensor that is used in the loss calculation.

Pseudocode:

other_tensor = torch.tensor(..., requires_grad=True)

loss = disc_loss + gen_loss + f(other_tensor)

optimizer = Adam(list(gen.parameters()) + list(self._measure.parameters()), lr=1e-4)

But this will not optimize other_tensor, since it is not passed to the optimizer. How should I add it to my optimizer?
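For reference, here is a minimal runnable version of the situation (the linear layer and the loss are stand-ins for my actual GAN and loss terms):

```python
import torch
from torch import nn

gen = nn.Linear(4, 4)  # stand-in for the generator
other_tensor = torch.zeros(4, requires_grad=True)

# The optimizer only knows about the generator's parameters,
# not about other_tensor.
optimizer = torch.optim.Adam(gen.parameters(), lr=1e-4)

loss = gen(torch.randn(4)).sum() + other_tensor.sum()
loss.backward()
optimizer.step()

# other_tensor receives a gradient from backward(), but
# optimizer.step() never updates it because it was not
# registered with the optimizer.
print(other_tensor.grad)  # populated gradient
print(other_tensor)       # still all zeros
```

This is exactly the behavior I want to fix: the tensor gets a gradient but is never updated.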