# How to compute loss in ROI region of a tensor?

I have a tensor size of 16x16. I only want to compute the loss and update gradient in a region of interesting (ROI) of the tensor such as ROI[4:12,4:12]=1. I used the below code but it shows an error. How to do it? Thanks

```python
import torch

input = torch.randn(16, 16, requires_grad=True)
mask = torch.zeros(16, 16)   # ROI mask: 1 inside [4:12, 4:12], 0 elsewhere
mask[4:12, 4:12] = 1
input = mask * input
print(input)
d = torch.mean(input)
d.backward()
print(input.grad.data)
```

The error is:

```
AttributeError                            Traceback (most recent call last)
<ipython-input-35-9826ca6fcef5> in <module>()
10 d = torch.mean(input)
11 d.backward()

AttributeError: 'NoneType' object has no attribute 'data'
```

It looks like `input.grad` is None.

Hi John,

Here,

```python
input = mask * input
```

you are replacing (and hiding) the real `input`, the leaf node of your graph. If you want to access the gradients of the input, you should keep the original `input` reference and give the masked tensor a new name:

```python
import torch
input = torch.randn(16, 16, requires_grad=True)
mask = torch.zeros(16, 16)
mask[4:12, 4:12] = 1
masked_input = mask * input  # `input` stays a leaf, so its .grad survives
d = torch.mean(masked_input)
d.backward()
print(input.grad)  # populated: nonzero inside the ROI, zero outside
```

Hope that helps!

Additionally, if you ever want to access the gradients of `masked_input`, which is a non-leaf tensor, you will have to call `masked_input.retain_grad()` after its creation.
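For example, a minimal sketch (assuming the same 16x16 input and ROI mask as above):

```python
import torch

input = torch.randn(16, 16, requires_grad=True)
mask = torch.zeros(16, 16)
mask[4:12, 4:12] = 1

masked_input = mask * input  # non-leaf: autograd discards its .grad by default
masked_input.retain_grad()   # ask autograd to keep it

d = torch.mean(masked_input)
d.backward()
print(masked_input.grad)     # now a tensor (uniformly 1/(16*16)) instead of None
```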
What do you mean by ignoring the region in the mask? Which gradient should be 1? `d.grad` would be 1, since we run backward on it.
In `d = torch.mean(masked_input)`, all the elements (even the zeroed ones) contribute to the mean, so the final gradients will be `1/(16*16)` on the non-masked elements and 0 on the masked (zeroed-out) ones.
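This is easy to verify numerically (a quick check, reusing the same 16x16 setup as above):

```python
import torch

input = torch.randn(16, 16, requires_grad=True)
mask = torch.zeros(16, 16)
mask[4:12, 4:12] = 1

d = torch.mean(mask * input)
d.backward()

# 1/(16*16) = 0.00390625 inside the ROI, exactly 0 outside it
print(input.grad[4:12, 4:12].min().item(), input.grad[4:12, 4:12].max().item())
print(input.grad[0, 0].item())
```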