How to get gradients with respect to input and change input (rather than trainable vars) to minimize loss

Hi all.
I want to change the input so that it minimizes the loss (rather than changing the trainable variables). In other words, I want to update the input instead of the weights and biases.
I use this code in each iteration:

data_batch = Variable(data_batch, requires_grad=True)
prediction = model(data_batch)
loss = loss_function(prediction, fitness_batch)
loss.backward(retain_graph=False)
grad_input = data_batch.grad
print("grad", grad_input)

grad_input is not None and has a value, but the input does not change.
I don't know what my mistake is. Please help me.

Variables are deprecated since PyTorch 0.4, so you should use tensors now.

You would have to pass the input tensor to an optimizer so that it can update the input (similar to how you pass model.parameters() to an optimizer).
The same workflow applies as usual, i.e. you would zero out the gradients, perform the forward and backward pass, and call optimizer.step() to update the input.
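
A minimal sketch of that workflow (Adam and the learning rate are arbitrary choices here; model, loss_function, data_batch, and fitness_batch are the names from the original post):

import torch

# Make the input a leaf tensor that tracks gradients.
data_batch = data_batch.detach().requires_grad_(True)

# Pass the input (not model.parameters()) to the optimizer.
optimizer = torch.optim.Adam([data_batch], lr=1e-2)

for step in range(100):
    optimizer.zero_grad()                    # clear the old input gradients
    prediction = model(data_batch)           # forward pass
    loss = loss_function(prediction, fitness_batch)
    loss.backward()                          # gradients w.r.t. data_batch
    optimizer.step()                         # updates the input, not the weights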

Thanks for the reply.
I don't understand how to use a tensor instead of a Variable. How do I set requires_grad=True for the input?
I have another question: when I want the gradient with respect to the input without changing W and b, do I set param.requires_grad to False or True?

for param in model.parameters():
    param.requires_grad = False

When I update W and b it is True, but I am not sure what to use here.
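
For reference, a small sketch (same names as above) of what freezing the weights does: the gradient with respect to the input still flows, so requires_grad = False on the parameters is safe when only the input should change.

for param in model.parameters():
    param.requires_grad = False      # weights receive no gradients

data_batch.requires_grad_(True)      # the input still does

loss = loss_function(model(data_batch), fitness_batch)
loss.backward()
print(data_batch.grad)               # populated; each param.grad stays None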

If I use:
data_batch.requires_grad = True
instead of
data_batch = Variable(data_batch, requires_grad=True)
is that correct?
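
One caveat, as a sketch: requires_grad can only be set directly on a leaf tensor, so a tensor that came out of earlier computations has to be detached first.

# Fine when data_batch is a leaf tensor (e.g. freshly loaded data):
data_batch.requires_grad_(True)

# Needed when data_batch was produced by earlier tensor operations:
data_batch = data_batch.detach().requires_grad_(True)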

My problem is solved, thanks for your guidance. I used it.
