I’m doing some custom optimization, and I am stuck on autograd for my embeddings. Isolating my problem to a minimal example:
import torch
import torch.nn as nn
from torch.autograd import Variable

embeds = nn.Embedding(12, 2)
a = embeds(Variable(torch.LongTensor([0])))  # look up the embedding for index 0
b = a.dot(a)
b.backward()
print(a.requires_grad)
print(a.grad)
This prints True and None.
I have also tried an index_select, and selecting the row as embeds.weight[0], with the same result. It works fine if I replace the embedding lookup with a = Variable(torch.Tensor(2), requires_grad=True), so I suspect I am doing something wrong at the embedding end.
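For reference, the other variants I tried look roughly like this (reconstructed, so the details may differ slightly):

# Also leaves a.grad as None after backward():
a = embeds.weight.index_select(0, Variable(torch.LongTensor([0])))
# Likewise:
a = embeds.weight[0]

# The plain-Variable version, where a.grad is populated as expected:
a = Variable(torch.Tensor(2), requires_grad=True)
b = a.dot(a)
b.backward()
print(a.grad)  # not None here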
At this point I need to put my hand up for some help.
Thanks very much.