Backpropagation

I have trained a network with the UNet architecture. The input and output of my network are images of 512 x 512 pixels. I use L1 loss for a certain number of epochs and then switch to a customized loss function. Now, when I back-propagate with the custom loss, I want to update only certain pixels of the image and keep the other pixels as they are. How do I do that, given that my last layer is a convolution layer? In the case of fully connected layers, I could imagine manually setting the gradients of a few pixels to zero.
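The idea of "manually setting the gradients of a few pixels to zero" can be sketched with a tensor gradient hook. This is a minimal, self-contained example (the mask layout and tensor shapes are assumptions, and `out` stands in for the real network output):

```python
import torch

# Stand-in for the network output: a single-channel 512 x 512 image.
out = torch.rand(1, 1, 512, 512, requires_grad=True)

# 1 = let the gradient through, 0 = zero it out (layout is an assumption).
grad_mask = torch.ones(1, 1, 512, 512)
grad_mask[..., 256:, :] = 0.0

# A non-leaf tensor we can hook; the hook multiplies the incoming
# gradient by the mask before it flows back.
y = out * 1.0
y.register_hook(lambda g: g * grad_mask)

loss = y.abs().sum()
loss.backward()

# Gradients at the masked-out pixels are exactly zero.
assert out.grad[..., 256:, :].abs().sum() == 0
```

Note that this only zeroes the gradient contribution *from* those output pixels; because the convolution kernels are shared across the whole image, the weight updates driven by the remaining pixels will still affect every output pixel on the next forward pass.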

One solution I can think of is to use a mask that marks the pixels you want to keep. First update all the pixels to get a new image I', then merge it with the original image I:

I_out = I * mask + I' * (1 - mask)
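The merge above can be sketched like this (tensor shapes and the mask region are assumptions for illustration):

```python
import torch

# Hypothetical shapes: a batch of single-channel 512 x 512 images.
I = torch.rand(1, 1, 512, 512)        # original image
I_prime = torch.rand(1, 1, 512, 512)  # updated image from the network
mask = torch.zeros(1, 1, 512, 512)    # 1 = keep original pixel, 0 = take updated pixel
mask[..., 100:200, 100:200] = 1.0

# Merge: masked pixels come from I, the rest from I'.
I_out = I * mask + I_prime * (1 - mask)

# Pixels inside the mask are unchanged.
assert torch.equal(I_out[..., 100:200, 100:200], I[..., 100:200, 100:200])
```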

This would still change the network weights based on all the pixels (pixelsToBeChanged and pixelsNotToChange). When the next image goes through the feed-forward pass, it would give me altered values everywhere.
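A small demonstration of why this happens: the convolution kernels are shared across all spatial positions, so even a loss restricted to some pixels changes the weights and therefore every output pixel. A minimal sketch (the toy 8 x 8 size, mask, and learning rate are assumptions):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)
x = torch.rand(1, 1, 8, 8)
before = conv(x).detach().clone()

mask = torch.zeros(1, 1, 8, 8)
mask[..., :4, :] = 1.0  # the loss only uses the top half

opt = torch.optim.SGD(conv.parameters(), lr=0.1)
loss = (conv(x) * mask).abs().sum()
loss.backward()
opt.step()

after = conv(x).detach()
# The bottom-half pixels, which never entered the loss, changed too,
# because the kernel weights are shared across the whole image.
assert not torch.allclose(before[..., 4:, :], after[..., 4:, :])
```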

Is the loss computed over just some pixels or over all pixels? I'm not sure how to help directly, but these two links may be useful:

  1. Automatic differentiation package - torch.autograd — PyTorch 2.1 documentation
  2. GitHub - albertpumarola/GANimation: GANimation: Anatomically-aware Facial Animation from a Single Image (ECCV'18 Oral) [PyTorch] (they use a mask to only change the ROIs)
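In the spirit of the GANimation link (restricting the loss to regions of interest with a mask), one common pattern is to weight the per-pixel loss by a mask before reducing it, so only the selected pixels produce gradients. A hedged sketch, assuming an L1-style loss and a hypothetical `update_mask` over the pixels to change:

```python
import torch

# Stand-ins for the network output and target (shapes are assumptions).
pred = torch.rand(1, 1, 512, 512, requires_grad=True)
target = torch.rand(1, 1, 512, 512)

# 1 = pixel contributes to the loss, 0 = pixel is ignored.
update_mask = torch.zeros(1, 1, 512, 512)
update_mask[..., :256, :] = 1.0

# L1 loss restricted to the masked pixels, normalized by their count.
loss = (torch.abs(pred - target) * update_mask).sum() / update_mask.sum()
loss.backward()

# Pixels outside the mask receive exactly zero gradient.
assert pred.grad[..., 256:, :].abs().sum() == 0
```

As noted above, this zeroes the gradient coming from the ignored pixels, but the shared convolution weights will still be updated, so subsequent forward passes change all output pixels.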