The problem of the incomplete last batch, without dropping it

I trained my recurrent model with a specific batch size, but during evaluation on the test set I ran into the problem of an incomplete last batch. For instance, my batch size is 10 but I have 259 samples. Note that dropping the last batch is not viable for me, as it is time-series data. Any suggestion for a solution that does not affect performance? Or is there a way to vary the batch size and keep the model working?

Not sure what you mean by a variable batch size. If you are iterating over a DataLoader, it will automatically create a batch of 9 samples for the last iteration, which you can use directly.
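For example, with 259 samples and batch_size=10, a standard DataLoader (with the default drop_last=False) simply yields a smaller final batch. A minimal sketch, where the tensor shapes are placeholders rather than your actual data:

import torch
from torch.utils.data import TensorDataset, DataLoader

# placeholder data: 259 samples, sequence length 30, 14 features (shapes are assumptions)
data = torch.randn(259, 30, 14)
labels = torch.randn(259)
test_dl = DataLoader(TensorDataset(data, labels), batch_size=10, drop_last=False)

sizes = [inputs.size(0) for inputs, _ in test_dl]
print(sizes[0], sizes[-1])  # 10 and 9 -- the last batch is just smaller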

Yes, it creates a batch of 9, but I get a dimension error because the trained model only accepts batches of 10.

Can you show a code snippet of how you are doing the evaluation, and also your model class?


def evaluate(model, test_dl, criterion):
    model.eval()
    epoch_loss = 0
    epoch_score = 0
    with torch.no_grad():
        for inputs, labels in test_dl:
            src = inputs.to(device)
            trg = inputs.to(device)
            labels = labels.to(device)
            # teacher forcing ratio set to 0 for evaluation
            rec_inputs, trg_inputs, pred, attn_weights = model(src, 0)
            # denormalize predictions
            pred = pred * denormalization
            rul_loss = criterion(pred.squeeze(), labels)
            score = scoring_func(pred.squeeze() - labels)

            epoch_loss += rul_loss.item()
            epoch_score += score
    return epoch_loss / len(test_dl), epoch_score
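The dimension error itself usually comes from the model rather than from evaluate: a recurrent model only "accepts" a fixed batch size when something like the initial hidden state is built from a batch size stored at training time. A minimal sketch of the batch-size-agnostic pattern, where the encoder and layer names are assumptions and not your actual model:

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.hidden_size = hidden_size
        self.rnn = nn.GRU(input_size, hidden_size, batch_first=True)

    def forward(self, src):
        # derive the batch size from the incoming tensor instead of a value fixed
        # during training, so full batches of 10 and the final batch of 9 both work
        batch_size = src.size(0)
        h0 = torch.zeros(1, batch_size, self.hidden_size, device=src.device)
        outputs, hidden = self.rnn(src, h0)
        return outputs, hidden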

Currently, I have worked around the problem by padding the incomplete batch.
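If the padding workaround stays in place, the padded rows should be discarded again before the loss and score are accumulated, otherwise they bias both averages. A rough sketch under that assumption; the helper and its names are hypothetical:

import torch

def pad_batch(inputs, labels, batch_size=10):
    # pad an incomplete batch up to batch_size by repeating the last sample,
    # and return the number of real samples so they can be recovered later
    n_real = inputs.size(0)
    if n_real < batch_size:
        pad = batch_size - n_real
        inputs = torch.cat([inputs, inputs[-1:].expand(pad, *inputs.shape[1:])], dim=0)
        labels = torch.cat([labels, labels[-1:].expand(pad)], dim=0)
    return inputs, labels, n_real

# inside the evaluation loop, after the forward pass:
# pred, labels = pred[:n_real], labels[:n_real]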