I have a 16x16 tensor. I only want to compute the loss and update the gradients in a region of interest (ROI) of the tensor, e.g. ROI[4:12, 4:12] = 1. I used the code below, but it throws an error. How can I do this? Thanks
You are replacing (and hiding) the real input (the leaf node) of your graph. If you want to access the gradients of the input, you should keep the original input reference and give the masked input its own name:
Additionally, if you ever want to access the gradients of masked_input, which is a non-leaf tensor, you will have to call masked_input.retain_grad() after its creation.
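A minimal sketch of this setup (the 16x16 shape and the ROI come from the question; the names input and masked_input follow the reply above, and the random data is just a stand-in):

```python
import torch

# Keep a reference to the real leaf tensor...
input = torch.randn(16, 16, requires_grad=True)  # leaf node

mask = torch.zeros(16, 16)
mask[4:12, 4:12] = 1.0  # ROI from the question

# ...and give the masked version its own name (a non-leaf tensor).
masked_input = input * mask
masked_input.retain_grad()  # needed to read masked_input.grad later

d = torch.mean(masked_input)
d.backward()

print(input.grad)         # populated, since `input` is a leaf
print(masked_input.grad)  # available only because of retain_grad()
```

Without the retain_grad() call, masked_input.grad would be None after backward(), since autograd frees intermediate gradients by default.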
Please find more info about it here.
What do you mean by ignoring the region in the mask? Which gradient should be 1? d.grad will be 1, since we call backward on it.
In d = torch.mean(masked_input), all elements (even the zeroed ones) contribute to the mean, so the final gradients will be 1/(16*16) on the non-masked elements and 0 on the masked (zeroed) ones.
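This can be checked numerically (a sketch reusing the shapes from the thread; the random data is just a placeholder):

```python
import torch

input = torch.randn(16, 16, requires_grad=True)
mask = torch.zeros(16, 16)
mask[4:12, 4:12] = 1.0

d = torch.mean(input * mask)  # mean over all 256 elements, zeros included
d.backward()

# 1/(16*16) = 0.00390625 inside the ROI, 0 everywhere else
print(input.grad[5, 5].item())  # inside ROI: 0.00390625
print(input.grad[0, 0].item())  # outside ROI: 0.0
```

Note the 1/(16*16) factor: the mean divides by all 256 elements, not by the 64 elements of the ROI, because the zeroed entries still count toward the denominator.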