Hi all.

I have a trained model, and I am trying to optimize the input image so that the loss on the output decreases. I saw some similar topics, but none of them solved my problem.

My flow: I concatenate two images A and B into C and feed C to the trained model, where A is the only image I want to optimize.

My code:

```python
model.eval()
Y = Y.cuda()
A = torch.tensor(A, requires_grad=True)   # the image I want to optimize
B = torch.tensor(B, requires_grad=False)  # fixed image
data_optimizer = torch.optim.Adam([A], lr=1.0)
C = torch.cat((A, B), 1)
C = torch.tensor(C, requires_grad=True).cuda()
for i in range(num_iters):
    Y_pred = model(C)
    loss = myloss(Y_pred, Y)
    print('loss: ' + str(loss.item()))
    print('random_val: ' + str(A[0, 0, 500, 500]))
    data_optimizer.zero_grad()
    loss.backward()
    data_optimizer.step()
```

I print the loss (which just looks noisy) and a random pixel from the image (which doesn't change at all).

My question is: why doesn't A change in this process?

When I instead pass C (the concatenation of A and B) to the optimizer, the prints show that A changes, but B changes as well, which is not acceptable.
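In case it helps, here is a minimal CPU repro of the first behavior with a toy model and shapes (hypothetical, my real setup is much larger; `C.detach().requires_grad_(True)` stands in for my `torch.tensor(C, requires_grad=True)` call):

```python
import torch

torch.manual_seed(0)
model = torch.nn.Conv2d(2, 1, kernel_size=3, padding=1)  # toy stand-in for my trained model
model.eval()

A = torch.randn(1, 1, 8, 8, requires_grad=True)  # the image I want to optimize
B = torch.randn(1, 1, 8, 8)                      # fixed image
Y = torch.zeros(1, 1, 8, 8)                      # toy target

opt = torch.optim.Adam([A], lr=1.0)
A_before = A.detach().clone()

# Built once, outside the loop, exactly as in my real code:
C = torch.cat((A, B), 1)
C = C.detach().requires_grad_(True)

for i in range(5):
    loss = torch.nn.functional.mse_loss(model(C), Y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(A.grad)                    # None: backward never reaches A
print(torch.equal(A, A_before))  # True: A is unchanged
```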

Any advice?

Thank you.