Hey, I am trying to shift a point coordinate by the offset output by the network ONet, then compute a loss between the shifted point map and the ground truth, and backpropagate to update ONet's parameters.

Here is a brief part of the code.

```
import torch
import numpy as np

c, w, h = img.shape
# Variable is deprecated; a plain tensor with requires_grad=True suffices.
ori_point = torch.tensor([10.0, 10.0], requires_grad=True)
offset = ONet(img)  # (b, 1, 2)
point_h = offset[0, 0, 0] + ori_point[0]
point_w = offset[0, 0, 1] + ori_point[1]
point_index = point_h * w + point_w
# index and src must have the same shape, so scatter a single-element src.
dot_map = torch.zeros(w * h).scatter_add_(
    0, index=point_index.long().unsqueeze(0), src=torch.ones(1)
).view(h, w)
loss = criterion(dot_map, gt)
loss.backward()
...
```

I want to use scatter_add_() to map the point coordinates onto a dot map, but the coordinates only enter the map as an integer index: the .long() cast (and the scatter index itself) is not differentiable, so when the loss is backpropagated the gradient never reaches the parameters of ONet.
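To make the problem concrete, here is a minimal, self-contained reproduction, with made-up names (`coord`, `idx`) standing in for my point coordinate and index. It shows that casting a continuous coordinate to a long index detaches it from the autograd graph, so the resulting dot map has no gradient path back to the coordinate:

```python
import torch

# A continuous coordinate that should receive a gradient.
coord = torch.tensor(3.2, requires_grad=True)

# Casting to an integer index detaches it from the autograd graph.
idx = coord.long()
print(idx.requires_grad)  # False: the cast is not differentiable

# A dot map built only from the integer index therefore has no grad_fn,
# so backpropagating through it cannot update anything upstream of coord.
dot_map = torch.zeros(10).scatter_add_(0, idx.unsqueeze(0), torch.ones(1))
print(dot_map.requires_grad)  # False: no path back to coord
```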

What is the correct way of doing this?