Backpropagate through input data

I want to backpropagate through input to generate adversarial data. How do I train the model to do this task such that during training time, the image is not altered?

I guess you mean you have a first model that outputs what is going to be the input of a second model?

If you want to propagate an error from, let’s say, the output of the second model all the way to the first model, PyTorch should do that for you automatically. Of course, that is only if you are using PyTorch tensors all along and you don’t have some sort of detachment in between, such as a call to .detach() or a with torch.no_grad() block.

I’m not sure if I understood your problem, I hope it helps.

I am trying to train a classifier on the CIFAR dataset first and then use this trained model to generate adversarial examples.
After training, I want to freeze the weights and pass a new image to the model with a target label, and then optimise the input such that the probability of the target class becomes maximal.

When you do the forward pass, PyTorch builds the computation graph; when you call backward on the loss, gradients are computed not only for the model weights (which you will not need if you freeze the model), but for every tensor with requires_grad=True that was involved in computing the loss, including the input image. So what you have to do is declare an optimizer (such as Adam, SGD, …), passing it the input image instead of the model’s weights as the parameter to optimize.
Then when you call optimizer.step(), you are optimizing the input image towards a version that minimizes the loss function you defined.
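A minimal sketch of that loop. The tiny Sequential model below is just a stand-in for your trained CIFAR classifier, and the target class index is arbitrary; only the structure (frozen weights, optimizer over the image) is the point:

```python
import torch
import torch.nn.functional as F

# Stand-in for a trained CIFAR classifier (placeholder architecture).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # freeze the model weights

# Leaf tensor we will optimize; requires_grad=True so gradients reach it.
image = torch.rand(1, 3, 32, 32, requires_grad=True)
target = torch.tensor([3])  # arbitrary target class index

# Optimize the image, not the model parameters.
optimizer = torch.optim.Adam([image], lr=0.01)

for _ in range(100):
    optimizer.zero_grad()
    logits = model(image)
    loss = F.cross_entropy(logits, target)  # minimizing this raises the target prob
    loss.backward()   # gradients flow into `image`, not the frozen weights
    optimizer.step()  # updates the image pixels
```

After the loop, the probability the model assigns to the target class on the optimized image should be far above chance.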

However, I still didn’t understand what you meant by “How do I train the model to do this task such that during training time, the image is not altered?”. The way I explained it, the input image is going to be altered. Isn’t that the whole point of this kind of training?

Sorry for the delay in the answer.

1 Like

Thank you for the answer. That is exactly what I am trying to do.
Just one question, how do I pass the image only to the optimizer? Should I make it a parameter or can I just pass the image tensor with requires_grad = True to the optimizer?

Instead of doing something like:
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
try
optimizer = torch.optim.Adam([inputTensor], lr=learning_rate)
(note that the optimizer expects an iterable of tensors, so the input tensor is wrapped in a list).

And, as you said, make sure requires_grad=True is set on the input tensor; otherwise no gradient will be accumulated for it and the optimizer will have nothing to apply.
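A small sketch of just that setup, with a dummy loss to exercise the machinery (the clamp at the end is an optional extra to keep pixel values in a valid range, not something from the discussion above):

```python
import torch

# Leaf tensor with requires_grad=True, as discussed.
inputTensor = torch.rand(1, 3, 32, 32, requires_grad=True)

# The optimizer takes an iterable of tensors, so wrap the image in a list.
optimizer = torch.optim.Adam([inputTensor], lr=0.01)

loss = inputTensor.sum()  # stand-in loss just to demonstrate the flow
loss.backward()           # fills inputTensor.grad
optimizer.step()          # moves the pixel values, not any model weights

# Optional: keep pixels in a valid [0, 1] range after each step.
with torch.no_grad():
    inputTensor.clamp_(0.0, 1.0)
```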

2 Likes

Thanks a lot for the help.

1 Like