Image Manipulator to Maximise the Difference Between Two Networks

I would like to train a “generator” network that receives an RGB image (e.g. from ImageNet) and generates an output image that satisfies these criteria:

  1. The output is as similar as possible to the original image.
  2. The accuracy of one network (e.g. resnet50) drops on this output.
  3. The accuracy of another network (e.g. inception_v3) remains intact on this output.

The snippet of code for one training step would be like this:

import torch.nn as nn

mse = nn.MSELoss()
ce = nn.CrossEntropyLoss()

model_gen.zero_grad()
output_imgs = model_gen(input_imgs)
loss_gen = mse(output_imgs, input_imgs)        # criterion 1: stay close to the input

output_n1 = resnet50(output_imgs)
loss_n1 = ce(output_n1, targets)               # criterion 2: this loss should grow

output_n2 = inception_v3(output_imgs)
loss_n2 = ce(output_n2, targets)               # criterion 3: this loss should stay small

loss_all = loss_gen + (1 / loss_n1) + loss_n2  # maximise loss_n1 via the inverse term
loss_all.backward()
optimizer.step()
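
Both classifiers are pretrained and kept frozen; only the generator is optimised. Roughly, the setup is something like this (the generator below is just a placeholder for illustration, and I'm assuming torchvision's pretrained weights):

import torch
import torch.nn as nn
from torchvision import models

# Placeholder image-to-image generator; my real architecture is different.
model_gen = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)

# Both classifiers stay fixed in eval mode, so only the generator gets updated.
resnet50 = models.resnet50(pretrained=True).eval()
inception_v3 = models.inception_v3(pretrained=True).eval()
for p in list(resnet50.parameters()) + list(inception_v3.parameters()):
    p.requires_grad = False

optimizer = torch.optim.Adam(model_gen.parameters(), lr=1e-4)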

With this setup, after one epoch my generator learns to apply only subtle changes to pixel values, so “loss_gen” becomes almost 0; however, it doesn’t learn to increase “loss_n1” (since it sits in the denominator) or to decrease “loss_n2”.
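
That matches how the inverse term behaves on its own: the gradient of 1/loss_n1 with respect to loss_n1 is -1/loss_n1², so once that loss is around 1 or larger there is hardly any pressure to push it higher. A quick stand-alone check (separate from the training loop):

import torch

# Gradient of the inverse term 1/L at a few loss values; it shrinks like 1/L**2,
# so a moderately large loss_n1 receives almost no push to grow further.
for val in (0.5, 1.0, 2.0, 5.0):
    L = torch.tensor(val, requires_grad=True)
    (1.0 / L).backward()
    print(f"loss_n1 = {val:4.1f}  ->  d(1/L)/dL = {L.grad.item():+.4f}")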

Essentially, my generator only learns to replicate the input images, and the accuracy of resnet50 and inception_v3 on the generated images stays on par with their accuracy on the originals.

I’d appreciate any suggestions along these lines :slight_smile: