How to achieve tf.scatter_nd in PyTorch? (CopyNet project)

Hi, I want to implement CopyNet in PyTorch, but I have run into a problem.
In TensorFlow there is a scatter_nd operation:

indices = tf.constant([[0], [3]])
updates = tf.constant([0.2, 0.6])
scatter = tf.scatter_nd(indices, updates, shape=[4])
print(scatter)
# prints: [0.2, 0, 0, 0.6]

As you can see, each index in indices gives the position in the output where the corresponding value from updates is placed.

But I don't know how to achieve this in PyTorch.
I tried to use torch.cuda.sparse.FloatTensor(indices, updates, size) as a replacement, but it seems the gradient can't be backpropagated through it.

My whole code is as follows. It can train, but p_gen gradually becomes 1, so I suspect that no gradient flows back through new_atten_prob:

    # gate the generation and copy distributions with p_gen
    p_gen = p_gen.unsqueeze(-1)
    normal_output_prob = p_gen.expand_as(output_prob) * output_prob
    normal_atten_prob = (1 - p_gen).expand_as(atten_prob) * atten_prob
    new_atten_list = []
    for input_token, atten_prob_token in zip(input.split(1), normal_atten_prob.split(1)):
        atten_prob_token = atten_prob_token.squeeze(0)
        # scatter the attention probs to the positions given by the input token ids
        # (extended vocabulary of size vocab + 30) via a sparse tensor
        new_atten_list.append(torch.cuda.sparse.FloatTensor(
            input_token.data, atten_prob_token.data, torch.Size([normal_output_prob.size(1) + 30])).to_dense())
    new_atten_prob = torch.stack(new_atten_list, dim=0)
    new_normal_prob = torch.cat([normal_output_prob, Variable(torch.zeros(normal_output_prob.size(0), 30).cuda())], -1)
    prob = Variable(new_atten_prob, requires_grad=True) + new_normal_prob
    return prob

There is a scatter function in PyTorch:

http://pytorch.org/docs/master/tensors.html#torch.Tensor.scatter_
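
For the 1-D example at the top, I think something like this would do it (just a sketch, following the scatter_ docs):

    import torch

    indices = torch.LongTensor([0, 3])
    updates = torch.FloatTensor([0.2, 0.6])

    # scatter_ writes each value of updates at the position given by indices
    scatter = torch.zeros(4).scatter_(0, indices, updates)
    print(scatter)
    # prints: 0.2, 0, 0, 0.6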

I’m not 100% sure about how the autograd mechanics work during assignment yet, but from my understanding, when you create a new tensor using .data it loses the history of the Variable. That's why no backprop would occur.
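
A small example of what I mean (made-up values):

    import torch
    from torch.autograd import Variable

    x = Variable(torch.ones(3), requires_grad=True)
    y = x * 2
    z = Variable(y.data)     # new Variable built from .data
    print(y.requires_grad)   # True  -- y is still part of the graph
    print(z.requires_grad)   # False -- the history was dropped, so nothing flows back to x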

As far as I know, there is no autograd way to do that. If you want to backpropagate through this operation, you must extend autograd.Function and write the backward method by hand.

As far as I know, there is no autograd way to do that.

There’s no autograd through scatter?

There’s only torch.Tensor.scatter_(), which is a method of torch.Tensor, but there is no autograd Function with a backward method, so you can't backpropagate through it…

Damn, thanks for letting me know. I think I’ll be posting my own question about this after all.

I already asked it there: Edit a subtensor of a 3D tensor

But I think I will write this function by hand and create a pull request. It should not be difficult after all.
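
Roughly, I imagine it would look something like this (just a sketch; the name ScatterToDense and the exact signature are made up):

    import torch
    from torch.autograd import Function

    class ScatterToDense(Function):
        # scatter 1-D updates into a zero vector of length size,
        # with a hand-written backward that gathers the gradient back
        @staticmethod
        def forward(ctx, indices, updates, size):
            ctx.save_for_backward(indices)
            out = torch.zeros(size).type_as(updates)
            out.scatter_(0, indices, updates)
            return out

        @staticmethod
        def backward(ctx, grad_output):
            indices, = ctx.saved_tensors
            # no gradient for indices or size; read the output gradient back at the indices
            return None, grad_output.gather(0, indices), None

and then call it as prob = ScatterToDense.apply(indices, updates, size).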

I have the same sort of problem. I’m implementing a copy mechanism and need to get all the probabilities into a larger matrix.

    attention_scores = Variable(torch.zeros(target_seq_len, batch_size, max_dict_index))
    for i in range(batch_size):
        for j in range(target_seq_len):
            for k in range(input_seq_len):
                attention_scores[j, i, input[k, i].data[0]] += attentions[j, i, k]

But this naturally doesn’t work because Variables don’t support in-place assignment. Is there another way to construct a complex matrix, as would be done with scatter, and still retain the autograd history?

@alexis-jacq Variable does implement the scatter function; the implementation is here, and the Function definition is here.
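
A quick check that the gradient flows through it (a sketch, reusing the toy values from the top of the thread):

    import torch
    from torch.autograd import Variable

    updates = Variable(torch.FloatTensor([0.2, 0.6]), requires_grad=True)
    indices = Variable(torch.LongTensor([0, 3]))
    base = Variable(torch.zeros(4))

    out = base.scatter(0, indices, updates)  # out-of-place scatter on Variables
    out.sum().backward()
    print(updates.grad)                      # all ones -- the gradient reaches updates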


Oh, and I see below there’s a ScatterAdd function which would solve my issue of having repeated indices in the input sequence!
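
Something like this is what I have in mind (just a sketch, with dummy sizes, reusing the names from my loop above):

    import torch
    from torch.autograd import Variable

    target_seq_len, batch_size, input_seq_len, max_dict_index = 5, 2, 7, 100
    attentions = Variable(torch.rand(target_seq_len, batch_size, input_seq_len), requires_grad=True)
    input = Variable(torch.LongTensor(input_seq_len, batch_size).random_(0, max_dict_index))

    # expand the token ids to the same shape as attentions: (target_seq_len, batch_size, input_seq_len)
    index = input.t().unsqueeze(0).expand(target_seq_len, batch_size, input_seq_len).contiguous()
    # scatter_add accumulates the attention mass, so repeated token ids get summed instead of overwritten
    attention_scores = Variable(torch.zeros(target_seq_len, batch_size, max_dict_index)).scatter_add(2, index, attentions)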

Wow, great! I was looking for it in the docs, but I should have looked in the GitHub sources first. Thanks @albanD!