I’m trying to create my own (slightly different) implementation of nn.Embedding.
The simplest form of the code is just
```python
def forward(self, indices):
    return self.table[indices]
```
which converts a batch of indices into a batch of embedding vectors.
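For context, here is a minimal sketch of the whole module as I currently have it (the class name MyEmbedding and the random initialization are just placeholders):

```python
import torch
import torch.nn as nn

class MyEmbedding(nn.Module):
    def __init__(self, num_embeddings, embedding_dim):
        super().__init__()
        # one learnable row per index, like nn.Embedding's weight
        self.table = nn.Parameter(torch.randn(num_embeddings, embedding_dim))

    def forward(self, indices):
        # advanced indexing: (*batch,) -> (*batch, embedding_dim)
        return self.table[indices]
```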
Now I would like the gradient to be sparse, like with nn.Embedding, so I can optimize with SparseAdam.
Is there a way to do this with autograd?
Or do I have to implement my own backward method?
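To make the second option concrete, this is roughly the custom autograd.Function I have in mind, where the backward builds the table gradient as a sparse COO tensor (SparseLookup is a made-up name, and I haven't verified this is the right way to construct the sparse gradient):

```python
import torch

class SparseLookup(torch.autograd.Function):
    @staticmethod
    def forward(ctx, table, indices):
        ctx.save_for_backward(indices)
        ctx.table_shape = table.shape
        # same lookup as before: rows of the table selected by index
        return table[indices]

    @staticmethod
    def backward(ctx, grad_output):
        (indices,) = ctx.saved_tensors
        num_rows, dim = ctx.table_shape
        # Scatter grad_output back into a sparse gradient for the table:
        # one COO entry per looked-up row (duplicate rows sum on coalesce).
        grad_table = torch.sparse_coo_tensor(
            indices.reshape(1, -1),        # (1, nnz) row indices
            grad_output.reshape(-1, dim),  # (nnz, dim) row gradients
            (num_rows, dim),
        )
        return grad_table, None  # indices are integers, no gradient needed
```

The forward would then call `SparseLookup.apply(self.table, indices)` instead of indexing the table directly, if that's the intended pattern.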