I want to do something involving indexing with gradients. I have reviewed most related topics, like Indexing a variable with a variable, but my problem is a little different from them. For instance,

import torch
from torch.autograd import Variable
x = Variable(torch.randn(68, 2)) # x is an output from a network, which holds 68 point locations
heatmaps = Variable(torch.zeros(68, 64, 64), requires_grad=True)
for i in range(x.shape[0]):
    heatmaps[i, x[i, 1].int(), x[i, 0].int()] = 1
target_heatmaps = Variable(torch.randn(68, 64, 64), requires_grad=True) # some meaningful label
loss = torch.sum(target_heatmaps - heatmaps)

How can I get the gradients of x? It seems difficult for two reasons: one is that the gradient of x[,].int() is zero, and the other is that heatmaps[i, x[i, 1].int(), x[i, 0].int()] = 1 raises an error like "leaf variable has been moved into the graph interior".

heatmaps[i, x[i, 1].int(), x[i, 0].int()] = 1 raises an error like "leaf variable has been moved into the graph interior".

This is just because you modify heatmaps in place while you created it yourself.
You can use new_heatmaps = heatmaps.clone() and use new_heatmaps in the for loop and in your loss computation to avoid that.
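A minimal sketch of that fix, using current PyTorch where Variable is no longer needed (the clamp to the 64x64 grid is my addition, so random coordinates stay in range):

```python
import torch

x = torch.randn(68, 2)  # predicted point locations
heatmaps = torch.zeros(68, 64, 64, requires_grad=True)

# Writing into `heatmaps` directly would move the leaf into the graph
# interior; writing into a clone avoids that error.
new_heatmaps = heatmaps.clone()
for i in range(x.shape[0]):
    row = int(x[i, 1].clamp(0, 63))  # keep indices inside the grid
    col = int(x[i, 0].clamp(0, 63))
    new_heatmaps[i, row, col] = 1

target_heatmaps = torch.randn(68, 64, 64)
loss = torch.sum(target_heatmaps - new_heatmaps)
loss.backward()  # gradients flow back to `heatmaps` (still not to x)
```

Note this only resolves the in-place error; gradients still cannot reach x through the integer indices.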

it seems difficult for one reason: the gradient of x[,].int() is zero

Gradients for integer values are not really possible, as you don't even have a continuous function. You will need a softer version of this.
For example, make your network output a mask of size (68, 64, 64) that goes through a softmax, and use that as the target heatmap?
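One common soft alternative (my own illustration, not from the post) is to render a differentiable Gaussian around each continuous coordinate instead of setting a single integer cell to 1; then gradients flow back to the coordinates themselves:

```python
import torch

def soft_heatmaps(points, size=64, sigma=1.5):
    """Render one differentiable Gaussian heatmap per (x, y) point.

    No rounding to integer indices, so the result is a smooth
    function of `points` and gradients can reach them.
    """
    ys = torch.arange(size, dtype=points.dtype).view(1, size, 1)
    xs = torch.arange(size, dtype=points.dtype).view(1, 1, size)
    px = points[:, 0].view(-1, 1, 1)
    py = points[:, 1].view(-1, 1, 1)
    d2 = (xs - px) ** 2 + (ys - py) ** 2  # squared distance to each cell
    return torch.exp(-d2 / (2 * sigma ** 2))  # shape (N, size, size)

x = (torch.randn(68, 2) * 10 + 32).requires_grad_(True)  # continuous locations
target = torch.randn(68, 64, 64)
heatmaps = soft_heatmaps(x)
loss = ((heatmaps - target) ** 2).mean()
loss.backward()  # x.grad is now defined and generally nonzero
```

Here a squared-error loss replaces the plain sum from the question, since a sum of differences would give every cell the same constant gradient.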

Thanks for your reply. I have realized my implementation does not work, even though I got a gradient of x by following topics like Indexing a variable with a variable.
Thanks for your patience again.