How can I train an input (not a weight)?

In a GAN, I want to find the latent vector z corresponding to a real image. One way to do this is to train z to minimize the error between the sampled image and the real image.

However, when I ran the code below, I got this error: “ValueError: can’t optimize a non-leaf Variable”.

targets             # target images of shape (batch_size, 3, 64, 64)

z = Variable(torch.randn(batch_size, 100), requires_grad=True).cuda()
optim = torch.optim.Adam([z], lr=0.01)            # this line raises the error (removing .cuda() above fixes it)

samples = generator(z)        # sampled images
loss = torch.mean((targets - samples)**2)

How can I solve this problem?



You can refer to this article.


You have a mistake in this code: .cuda() is applied after the Variable is created, so the result is no longer a leaf. It should be:

z = Variable(torch.randn(batch_size, 100).cuda(), requires_grad=True)
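Putting the pieces together, here is a minimal CPU/GPU sketch of the whole idea. The `generator` below is a hypothetical stand-in (a single `nn.Linear`) just so the example runs; any differentiable generator works the same way. On current PyTorch you can also skip `Variable` entirely and create the tensor directly on the device:

```python
import torch

torch.manual_seed(0)
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Hypothetical stand-in for a trained GAN generator.
generator = torch.nn.Linear(100, 64).to(device)
for p in generator.parameters():
    p.requires_grad_(False)          # freeze the generator; only z is trained

targets = torch.randn(4, 64, device=device)   # pretend "real images" (batch_size=4)

# Create z directly on the target device so it stays a leaf tensor.
z = torch.randn(4, 100, device=device, requires_grad=True)

optim = torch.optim.Adam([z], lr=0.01)
first_loss = None
for step in range(200):
    samples = generator(z)                    # sampled images
    loss = torch.mean((targets - samples) ** 2)
    if first_loss is None:
        first_loss = loss.item()
    optim.zero_grad()
    loss.backward()
    optim.step()
```

After a few hundred steps the reconstruction error should be well below its starting value; `z` is then an approximate latent code for `targets`.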

In PyTorch 0.4.0, how do I train a model where I assign one additional tensor
task_parameters = torch.ones(2).cuda().requires_grad_()
to learn the task weights via

precision = torch.exp(-1 * self.task_parameters)
total_loss = (loss * precision[0] + self.task_parameters[0]
              + loss_quality * precision[1] + self.task_parameters[1])

Now, how do I update task_parameters in the optimizer?

optimizer = optim.SGD(itertools.chain(model.parameters(), [task_parameters]),
                      lr=args.lr, momentum=args.momentum)   # ???
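For reference, here is a self-contained sketch of this setup with the optimizer line fixed (one comma, and SGD needs an explicit `lr`). The `model`, loss terms, and hyperparameters are placeholders, not the original code:

```python
import itertools
import torch
from torch import optim

torch.manual_seed(0)
model = torch.nn.Linear(10, 2)                        # placeholder model
task_parameters = torch.zeros(2, requires_grad=True)  # leaf tensor (add device='cuda' on GPU)

# Corrected optimizer construction: chain the model parameters with the extra tensor.
optimizer = optim.SGD(
    itertools.chain(model.parameters(), [task_parameters]),
    lr=0.1, momentum=0.9)

x = torch.randn(8, 10)
out = model(x)
loss = out[:, 0].pow(2).mean()           # placeholder per-task losses
loss_quality = out[:, 1].pow(2).mean()

# Uncertainty-style task weighting, as in the question.
precision = torch.exp(-1 * task_parameters)
total_loss = (loss * precision[0] + task_parameters[0]
              + loss_quality * precision[1] + task_parameters[1])

optimizer.zero_grad()
total_loss.backward()
optimizer.step()                          # updates both the model and task_parameters
```

Because `task_parameters` is a leaf with `requires_grad=True`, `backward()` fills its `.grad` and the optimizer step moves it along with the model weights.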


@BestSonny task_parameters should be a leaf Tensor to be optimized, i.e. its .grad field should be populated with gradients. Hence, you define it like this:

task_parameters = torch.ones(2, device='cuda', requires_grad=True)

Alternatively, starting from what you have:

task_parameters = torch.ones(2).cuda()                       # non-leaf: .cuda() is an op on a leaf
task_parameters = task_parameters.detach().requires_grad_()  # re-wrap as a leaf with requires_grad set
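The leaf/non-leaf distinction can be checked with `.is_leaf`. This small CPU demo uses `.double()` in place of `.cuda()` (both are ops, so both produce a non-leaf) so it runs without a GPU:

```python
import torch

# A tensor created directly by the user is a leaf.
a = torch.ones(2, requires_grad=True)
# a.is_leaf -> True

# Applying any op (.double() here, .cuda() in the thread) to a
# requires_grad tensor yields a NON-leaf; its .grad is never populated,
# which is exactly what the optimizer complains about.
b = torch.ones(2, requires_grad=True).double()
# b.is_leaf -> False

# detach() + requires_grad_() re-wraps it as a fresh leaf.
c = b.detach().requires_grad_()
# c.is_leaf -> True
```

Note that `detach()` returns a new tensor, so the result must be assigned back, as in the snippet above.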

Thank you very much.