Backpropagating through scatter_add_

Hey, I am trying to shift a point coordinate by the offset predicted by the network ONet, then compute a loss between the resulting dot map and the ground truth. Backpropagating this loss should update the ONet parameters.

Here is the relevant part of the code.

import torch
from torch.autograd import Variable
import numpy as np

c, h, w = img.shape  # img is (C, H, W)

# original point coordinate (row, col)
ori_point = Variable(torch.from_numpy(np.array([10, 10], dtype=np.float32)), requires_grad=True)

offset = ONet(img)  # predicted offset, shape (b, 1, 2)

# shift the point by the predicted offset
point_h = offset[0, 0, 0] + ori_point[0]
point_w = offset[0, 0, 1] + ori_point[1]

# flatten (row, col) into a single index over the h*w grid
point_index = point_h * w + point_w

# scatter a 1 at the shifted position to build the dot map
dot_map = torch.zeros(h * w).scatter_add_(0, index=point_index.long().unsqueeze(0), src=torch.ones(h * w)).view(h, w)

loss = criterion(dot_map, gt)
loss.backward()
...

I want to use scatter_add_() to map the point coordinate onto a dot map, but the coordinate has to be turned into an integer index, so when the loss is backpropagated it seems that the parameters of ONet are not updated.

What is the correct way of doing this?

x = conv(x)
p.scatter_add_(2, index=x, src=data)

Simply put: will gradients flow correctly to the weights of conv if the loss is calculated from p?

Hi Wenjie!

No. scatter_add_() requires that its index argument be a tensor
of type int64. Gradients don’t (and can’t) flow back through integers.

(Gradients will flow back through self (your p) and src (your data)
if they have requires_grad = True.)

Best.

K. Frank