Backpropagating loss for non-network tensors

Hi all, I’m not sure my terminology will be correct, as I’m a bit new to ML. I’ve found these two related threads, but I’m not sure my takeaway from them is correct, so I’d like to get a bit more help if possible.

So essentially, what I have is a GAN whose generator produces a 256x256 tensor that we’ll call noise.

I take a 256x256 subsection of a larger image, which we’ll call patch, feed it into the generator to get noise, and then add noise back onto patch.

I’ll refer to the result as protected_section = patch + noise.

I then place protected_section back into the larger image that patch was taken from (call it x_). From there, I run this new image (with protected_section in it) through a DeepFake generator and compare the similarity of its output (y_) to a baseline DeepFake output (y). The goal of the GAN is to find noise that causes a very noticeable distortion or change in the resulting DeepFake.
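For reference, the full pipeline I’m describing can be sketched roughly like this, with tiny stand-in `Conv2d` modules playing the role of the actual GAN generator and DeepFake model (the module choices, image sizes, and patch location are just placeholders, not my real setup):

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the real models, just to show the data flow.
generator = nn.Conv2d(3, 3, 3, padding=1)  # plays the role of the GAN generator
deepfake = nn.Conv2d(3, 3, 3, padding=1)   # plays the role of the DeepFake model

x = torch.rand(1, 3, 512, 512)             # the larger image
top, left = 100, 100                       # where the patch sits (placeholder)

patch = x[:, :, top:top + 256, left:left + 256]  # 256x256 subsection
noise = generator(patch)                         # generator output
protected_section = patch + noise                # perturbed patch

x_ = x.clone()                             # paste the protected patch back
x_[:, :, top:top + 256, left:left + 256] = protected_section

y = deepfake(x)                            # baseline DeepFake output
y_ = deepfake(x_)                          # DeepFake of the protected image
```

The slice assignment into `x_` is recorded by autograd, so `x_` stays connected to the generator through `protected_section`.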

Is there a way to properly backpropagate the loss to the GAN, even though y and y_, which I compare via a norm, didn’t come from the network itself?

My current solution, which was my takeaway from the threads above, is:

    y = Variable(original_deepfakes, requires_grad=True)
    y_ = Variable(swapfaces(protected_images, paths), requires_grad=True)
    norm_similarity = torch.abs(torch.dot(
        torch.flatten(y_ / torch.norm(y_, 2)),
        torch.flatten(y / torch.norm(y, 2))))

However, during training the loss between y and y_ doesn’t seem to affect anything: no changes take place in the final DeepFakes, even though the noise is applied.
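In case it helps narrow things down, one thing I’ve been checking is whether the loss can reach the generator at all. Wrapping a model output in a fresh `Variable(..., requires_grad=True)` creates a new leaf tensor with no `grad_fn`, which cuts it off from the graph that produced it (`Variable` is also deprecated in current PyTorch; plain tensors carry gradients). A minimal sketch with stand-in `Conv2d` modules, assuming the real swapfaces model is a differentiable `torch.nn.Module`:

```python
import torch
import torch.nn as nn

generator = nn.Conv2d(3, 3, 3, padding=1)  # stand-in for the GAN generator
swapfaces = nn.Conv2d(3, 3, 3, padding=1)  # stand-in for the DeepFake model

patch = torch.rand(1, 3, 256, 256)
y_ = swapfaces(patch + generator(patch))   # still attached to the graph
y = swapfaces(patch).detach()              # baseline target; no grad needed

# Re-wrapping an output (the old Variable(..., requires_grad=True) pattern)
# yields a leaf tensor with no grad_fn, so nothing upstream gets gradients:
y_rewrapped = y_.detach().clone().requires_grad_(True)
assert y_.grad_fn is not None and y_rewrapped.grad_fn is None

# Same cosine-similarity loss as in my snippet; if the graph is intact,
# backward() should leave non-None gradients on the generator's weights:
norm_similarity = torch.abs(torch.dot(
    torch.flatten(y_ / torch.norm(y_, 2)),
    torch.flatten(y / torch.norm(y, 2))))
norm_similarity.backward()
assert generator.weight.grad is not None
```

If `y_` in the real pipeline has `grad_fn=None` at the point where the loss is computed, that would explain why the noise never changes.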

Any help would be greatly appreciated!