I’m not 100% sure how the autograd mechanics work during assignment yet, but from my understanding, when you create a new tensor using .data it loses the history of the variable, which is why no backprop occurs.
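A quick sketch of what I mean (written against current PyTorch, where Variable is merged into Tensor; the tensor names are just for illustration):

```python
import torch

a = torch.ones(3, requires_grad=True)
b = a * 2
# A write through .data is invisible to autograd: no operation is recorded,
# so this assignment leaves no history and plays no part in the backward pass.
b.data[0] = 100.0
b.sum().backward()
print(a.grad)  # tensor([2., 2., 2.]) -- the .data write was never tracked
```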

As far as I know, there is no autograd-aware way to do that. If you want to backpropagate through this operation, you must extend autograd.Function and write the backward method by hand.
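For example, here is a minimal hand-written Function that scatters a 1-D source into a zero vector and routes the gradient back in backward. The class name and shapes are made up for illustration, and this uses the staticmethod API of current PyTorch (older versions used instance methods on Function):

```python
import torch

class ScatterToRow(torch.autograd.Function):
    @staticmethod
    def forward(ctx, src, index, size):
        ctx.save_for_backward(index)
        out = src.new_zeros(size)
        out.scatter_(0, index, src)   # out[index[i]] = src[i]
        return out

    @staticmethod
    def backward(ctx, grad_output):
        (index,) = ctx.saved_tensors
        # gather the gradient from the positions src was scattered to;
        # index and size are non-differentiable, so return None for them
        return grad_output.gather(0, index), None, None

src = torch.tensor([1., 2.], requires_grad=True)
idx = torch.tensor([3, 0])
out = ScatterToRow.apply(src, idx, (5,))   # out = [2., 0., 0., 1., 0.]
out.sum().backward()
print(src.grad)  # tensor([1., 1.])
```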

There’s only torch.Tensor.scatter_(), which is a method of torch.Tensor, not an autograd Function with a backward method, so you can’t backpropagate through it…

I have the same sort of problem. I’m implementing a copy mechanism and need to accumulate all the attention probabilities into a larger matrix:

attention_scores = Variable(torch.zeros(target_seq_len, batch_size, max_dict_index))
for i in range(batch_size):
    for j in range(target_seq_len):
        for k in range(input_seq_len):
            attention_scores[j, i, input[k, i].data[0]] += attentions[j, i, k]

But this naturally doesn’t work, because Variables don’t support in-place assignment. Is there another way to construct such a matrix, as scatter would, while still retaining the autograd history?
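In case it helps anyone landing here later: in recent PyTorch versions the out-of-place Tensor.scatter_add is differentiable with respect to its source, so the triple loop above can be written as a single scatter along dim 2 and gradients still flow. The sizes below are made up, and inp stands in for the input tensor from the loop:

```python
import torch

# Hypothetical sizes standing in for target_seq_len, batch_size,
# input_seq_len and max_dict_index from the loop above.
T, B, K, V = 4, 2, 3, 10
attentions = torch.rand(T, B, K, requires_grad=True)
inp = torch.randint(0, V, (K, B))   # token ids, shaped like `input` above

# index[j, i, k] = inp[k, i], so scatter_add along dim 2 reproduces
# attention_scores[j, i, inp[k, i]] += attentions[j, i, k]
index = inp.t().unsqueeze(0).expand(T, B, K).contiguous()
scores = torch.zeros(T, B, V).scatter_add(2, index, attentions)

scores.sum().backward()             # gradients flow back to `attentions`
print(attentions.grad.shape)  # torch.Size([4, 2, 3])
```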