In a GAN, I want to find the latent vector z corresponding to a real image. One way to do this is to train z to minimize the error between the sampled image and the real image.

However, when I ran the code below, I got this error: "ValueError: can't optimize a non-leaf Variable".

targets # target images of shape (batch_size, 3, 64, 64)
z = Variable(torch.randn(batch_size, 100), requires_grad=True).cuda() # .cuda() makes z a non-leaf
optim = torch.optim.Adam([z], lr=0.01) # This causes the error (if I delete .cuda() above, the problem goes away)
samples = generator(z) # sampled images
loss = torch.mean((targets - samples)**2)
loss.backward()
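For reference, here is one common way to sidestep the error: create z directly on the GPU instead of calling `.cuda()` on a tensor that already has `requires_grad=True` (the `.cuda()` call records an op in the graph, so its result is no longer a leaf). This is a minimal sketch; the `device` fallback to CPU is just so it runs without a GPU.

```python
import torch

batch_size = 4
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Creating z on the target device in one step keeps it a leaf tensor,
# so the optimizer accepts it.
z = torch.randn(batch_size, 100, device=device, requires_grad=True)
optim = torch.optim.Adam([z], lr=0.01)  # no ValueError: z is a leaf
```

In PyTorch 0.4+ the `device=` argument (or creating the tensor first and only then calling `requires_grad_()`) avoids ever producing a non-leaf copy.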

In PyTorch 0.4.0, how do we train a model where I assign one additional tensor
task_parameters = torch.ones(2).cuda().requires_grad_()
to learn the task weights?

How do I update task_parameters in the optimizer?
optimizer = optim.SGD(itertools.chain(list(model.parameters()), [task_parameters]), lr=args.lr, momentum=args.momentum)

@BestSonny task_parameters should be a leaf Tensor to be optimized, i.e. its .grad field should be filled with gradients. Hence, you can define it like this:

task_parameters = torch.ones(2).cuda() # what you have in your code
task_parameters = task_parameters.detach().requires_grad_() # detach() returns a new leaf tensor, so reassign it; requires_grad_() then makes it trainable
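Putting the two answers together, a full sketch might look like the following. The `nn.Linear` model and the toy loss are stand-ins for whatever model and multi-task loss you actually use; the point is only that a leaf `task_parameters` can be chained into the same optimizer as the model's parameters.

```python
import itertools
import torch
import torch.nn as nn
import torch.optim as optim

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(10, 2).to(device)  # stand-in for the real model

# Created on the target device in one step, so it is a leaf tensor.
task_parameters = torch.ones(2, device=device, requires_grad=True)

# Chain the model parameters and the extra leaf tensor into one optimizer.
optimizer = optim.SGD(
    itertools.chain(model.parameters(), [task_parameters]),
    lr=0.1, momentum=0.9,
)

# One dummy step: task weights scale the model output in a toy loss.
loss = (model(torch.randn(4, 10, device=device)) * task_parameters).pow(2).sum()
optimizer.zero_grad()
loss.backward()
optimizer.step()  # both the model and task_parameters are updated
```

Because `task_parameters` is a leaf, `loss.backward()` fills its `.grad` field and `optimizer.step()` updates it alongside the model weights.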