I’m new to PyTorch, so I apologize if this is an obvious question.
I’m training in batches. Each item in the batch has N_i vectors (these are RNN embeddings of sequences), and the count varies per item: the first item might have 2 vectors, the second 7, and so on.
For each item in the batch, I want to multiply each of its RNN embeddings by another vector unique to that batch item, i.e.:
    for i in range(batch_size):
        rnn_embs = batch_rnn_embs[i]  # Variable of shape N_i x H
        vecs = batch_vecs[i]          # Variable of shape H x 1
        scores = rnn_embs.mm(vecs)    # Variable of shape N_i x 1
However, because these are batched, I eventually want the scores vector to have the same length for every item in the batch, i.e. some N_max, with -inf as the “padding” value. I tried the following:
    padding = torch.FloatTensor(pad_length, 1).fill_(-float('inf'))
    padded_scores = torch.cat([scores, padding], 0)
But I get an error: “expected a Variable argument, but got torch.cuda.FloatTensor”. I don’t want the padding to be a learnable Variable, but it seems I can’t concatenate a plain tensor with a Variable.
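For concreteness, is the right fix just to wrap the padding tensor in a non-learnable Variable myself before the cat? A minimal sketch of what I have in mind, assuming pre-0.4 PyTorch where Variables wrap tensors (pad_length and scores as above, with scores on the GPU):

    import torch
    from torch.autograd import Variable

    # Hypothetical fix (untested): wrap the padding in a Variable with
    # requires_grad=False so cat accepts it but it is never learned.
    padding = Variable(
        torch.FloatTensor(pad_length, 1).fill_(-float('inf')).cuda(),
        requires_grad=False,
    )
    padded_scores = torch.cat([scores, padding], 0)  # N_max x 1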
I’m also wondering whether I could simply extend scores to length N_max and then set the padding values with masked_fill, but I’m not sure how to extend a Variable either.
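Something like this is what I’m picturing for the masked_fill route, though it still relies on concatenating a Variable-wrapped pad as above, so I’m not sure it buys much (a rough, untested sketch; n_i is this item’s N_i):

    # Pad scores out to N_max with zeros, then overwrite the padding
    # rows with -inf via a byte mask. scores.data.new(...) keeps the
    # same tensor type/device as scores.
    pad = Variable(scores.data.new(N_max - n_i, 1).zero_())
    extended = torch.cat([scores, pad], 0)           # N_max x 1

    mask = scores.data.new(N_max, 1).zero_().byte()  # ByteTensor mask
    mask[n_i:] = 1                                   # mark padding rows
    padded_scores = extended.masked_fill(Variable(mask), -float('inf'))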
The last resort would be bmm. It’s possible, but it makes everything a bit chunkier, especially on the non-PyTorch side (I’d need to compute the masking/padding ahead of time and make sure it works properly in batch). I thought this kind of thing would be easier in PyTorch than in TensorFlow.
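If I do go the bmm route, I assume it would look roughly like this, with the padding done ahead of time on the data side (shapes here are my assumptions: batch_rnn_embs padded to B x N_max x H, batch_vecs of shape B x H x 1, and lengths[i] holding each item’s true N_i):

    # Batched scores in one shot, then mask out the padding positions.
    scores = torch.bmm(batch_rnn_embs, batch_vecs)   # B x N_max x 1

    mask = torch.cuda.ByteTensor(B, N_max, 1).zero_()
    for i, n in enumerate(lengths):
        mask[i, n:] = 1                              # padding positions
    scores = scores.masked_fill(Variable(mask), -float('inf'))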