NLLLoss still mysterious

Hi,

I am having trouble implementing a compute_loss function:

import torch.nn as nn

criterion = nn.NLLLoss(reduction='none')

def compute_loss(decoder_outputs, pad_target_seqs, padding_value=0):
    """
    Args:
      decoder_outputs (tensor): Tensor of log-probabilities of words produced by the decoder
                                (shape [max_seq_length, batch_size, output_dictionary_size])
      pad_target_seqs (tensor): Tensor of words (word indices) of the target sentence (padded with `padding_value`).
                                 The shape is [max_seq_length, batch_size, 1]
      padding_value (int):      Padding value. Keep the default one: the default padding value never
                                 appears in real sequences.
    """
    
    # How many target sequences do we have?
    N_seqs = pad_target_seqs.size(1)
    
    # Converted targets
    targets = []
    
    for i in range(N_seqs):
        
        target = pad_target_seqs[:, i, :].squeeze(1)
        print(target)  # just to check the values
        targets.append(target)
   
    # Loss computation
    loss_sum = 0

    for seq_idx, seq in enumerate(targets):
        for word_idx, word in enumerate(seq):
            # decoder_outputs[word_idx][seq_idx] has shape [output_dictionary_size]
            # and `word` is a 0-dim tensor; this is the call that fails
            loss_sum += criterion(decoder_outputs[word_idx][seq_idx], word)

    return loss_sum
I keep getting the following error:

ValueError: Expected 2 or more dimensions (got 1)
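
If it helps, I can reproduce the error with a single call on a 1-D input, which is what I believe happens inside my loop (random data, just for illustration, at least on my PyTorch version):

import torch
import torch.nn as nn

criterion = nn.NLLLoss(reduction='none')

log_probs = torch.randn(10).log_softmax(dim=0)  # shape [C], like decoder_outputs[word_idx][seq_idx]
word = torch.tensor(3)                          # 0-dim word index, like `word` in my loop
criterion(log_probs, word)                      # ValueError: Expected 2 or more dimensions (got 1)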

My questions:

  1. Shouldn’t the target be a scalar that is the word index?

  2. Could the ValueError come from decoder_outputs[word_idx][seq_idx]? I don’t really understand the documentation, which says the input should have shape [N x C]. How does that apply to my problem? I tried to sketch my guess below.
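
From the docs, my current guess is that NLLLoss always wants a batch dimension: input of shape [N, C] and target of shape [N]. So inside the loop, would something like this (just a sketch, with N = 1) be the expected usage?

one_loss = criterion(
    decoder_outputs[word_idx][seq_idx].unsqueeze(0),  # [C] -> [1, C], i.e. N = 1
    word.unsqueeze(0),                                # 0-dim -> [1]
)  # shape [1], since reduction='none'
loss_sum += one_loss.squeeze()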

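Or should I drop the Python loops entirely and do a single call on flattened tensors, masking out the padding afterwards? Here is a sketch of what I mean, assuming the shapes from the docstring:

def compute_loss(decoder_outputs, pad_target_seqs, padding_value=0):
    C = decoder_outputs.size(-1)
    log_probs = decoder_outputs.reshape(-1, C)   # [max_seq_length * batch_size, C]
    targets = pad_target_seqs.reshape(-1)        # [max_seq_length * batch_size]
    losses = criterion(log_probs, targets)       # one loss per word (reduction='none')
    mask = targets != padding_value              # drop the padded positions
    return losses[mask].sum()

(I also noticed that NLLLoss has an ignore_index argument, which looks like it could handle the padding for me, but I’m not sure if that is the intended way.)
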
Thanks for helping!
