I have an image named adv_patch, which is a tensor. I also have a batch of images with known bounding-box locations and a pretrained object detection network. I want to apply this adv_patch to the batch of images: rescale adv_patch, rotate it, and place it on each image at the locations indicated by the bounding boxes.
The goal is to optimize adv_patch by minimizing the network's loss with respect to adv_patch. This loss would be defined by how well the network still detects the objects in the images from my batch after adv_patch has been applied.
In the end I would like adv_patch to reliably 'fool' my detector.
As I understand it, this requires using only operations for which autograd can compute gradients, so that I can backpropagate all the way back to adv_patch. The rotating and rescaling functions in torchvision.transforms are therefore not an option: they operate only on PIL images, and converting a tensor to a PIL image detaches it from the computation graph, so no gradients would flow.
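To be concrete about what I mean by a differentiable rotate-and-rescale: something like the sketch below, built on affine_grid and grid_sample so gradients flow back to the patch. The function name and the angle/scale conventions here are my own (scale is the patch's size relative to the output canvas, angle is in radians), so treat it as an illustration rather than a finished implementation.

```python
import torch
import torch.nn.functional as F

def transform_patch(adv_patch, angle, scale, out_size):
    """Differentiably rotate and rescale a patch onto a larger canvas.

    adv_patch: (C, H, W) tensor (can require grad)
    angle:     0-dim tensor, rotation in radians
    scale:     float, patch size relative to the output canvas
    out_size:  (height, width) of the output canvas
    """
    c, h, w = adv_patch.shape
    cos, sin = torch.cos(angle), torch.sin(angle)
    zero = torch.zeros_like(cos)
    # Inverse affine map: output-canvas coordinates -> patch coordinates.
    # Dividing by `scale` makes the patch appear smaller on the canvas.
    theta = torch.stack([
        torch.stack([cos / scale, sin / scale, zero]),
        torch.stack([-sin / scale, cos / scale, zero]),
    ]).unsqueeze(0)  # (1, 2, 3)
    grid = F.affine_grid(theta, [1, c, out_size[0], out_size[1]],
                         align_corners=False)
    # Outside the patch, zero padding leaves the canvas empty.
    out = F.grid_sample(adv_patch.unsqueeze(0), grid,
                        padding_mode='zeros', align_corners=False)
    return out.squeeze(0)
```

Because both affine_grid and grid_sample are differentiable, calling .backward() on any loss computed from the output populates adv_patch.grad.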
I had some success using grid_sample to rotate and rescale a single adv_patch, but applying adv_patch to multiple detections across a batch of images is a whole different case.
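The batched version I am imagining would build one affine matrix per detection and paste the sampled patches back onto their source images with a mask. The sketch below assumes boxes given as (cx, cy, w, h) in normalized [-1, 1] image coordinates and non-overlapping patches; all names are hypothetical and this is just to illustrate the shape of the problem, not working code I have verified against a detector.

```python
import torch
import torch.nn.functional as F

def apply_patch_batch(adv_patch, images, boxes, img_idx, angles):
    """Paste one adv_patch onto every detection in a batch, differentiably.

    adv_patch: (C, ph, pw) tensor (can require grad)
    images:    (B, C, H, W) batch of images
    boxes:     (N, 4) as (cx, cy, w, h) in normalized [-1, 1] coordinates
    img_idx:   (N,) long tensor, which image each detection belongs to
    angles:    (N,) rotation per detection, in radians
    Assumes patches on the same image do not overlap.
    """
    n = boxes.shape[0]
    B, C, H, W = images.shape
    cx, cy, bw, bh = boxes.unbind(dim=1)
    cos, sin = torch.cos(angles), torch.sin(angles)
    # Inverse affine per detection: image coords -> patch coords.
    # Rows are R^{-1} scaled by 2/box_size, plus a translation to the centre.
    tx = -(cos * cx + sin * cy)
    ty = -(-sin * cx + cos * cy)
    theta = torch.stack([
        torch.stack([2 * cos / bw, 2 * sin / bw, 2 * tx / bw], dim=1),
        torch.stack([-2 * sin / bh, 2 * cos / bh, 2 * ty / bh], dim=1),
    ], dim=1)  # (N, 2, 3)
    grid = F.affine_grid(theta, [n, C, H, W], align_corners=False)
    patches = adv_patch.unsqueeze(0).expand(n, -1, -1, -1)
    canvas = F.grid_sample(patches, grid,
                           padding_mode='zeros', align_corners=False)
    # Sample an all-ones patch the same way to get a per-detection mask.
    ones = torch.ones_like(patches[:, :1])
    mask = F.grid_sample(ones, grid,
                         padding_mode='zeros', align_corners=False)
    # Sum detections per image (index_add_ is differentiable), then composite.
    full_canvas = torch.zeros_like(images)
    full_mask = torch.zeros(B, 1, H, W, dtype=images.dtype,
                            device=images.device)
    full_canvas.index_add_(0, img_idx, canvas * mask)
    full_mask.index_add_(0, img_idx, mask)
    full_mask = full_mask.clamp(0, 1)
    return images * (1 - full_mask) + full_canvas
```

The patched batch could then be fed to the detector, and the detection loss backpropagated straight through grid_sample into adv_patch.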
Is there any way I could make this work?