Handling input that is not a multiple of the batch size while training

How is the loss computed when a batch contains fewer instances (say 5) than the batch size (say 10)?
How does the trainloader manage this? How is the loss computed over only the last 5 instances? And what is the standard way of handling this?

Thanks in advance.

    import numpy as np
    import torch
    from tqdm import tqdm

    l1 = []  # per-batch training losses for this epoch
    for idx, data in tqdm(enumerate(trainloader), desc="Train epoch {}/{}".format(epoch + 1, EPOCH)):
        ids = data['ids_sen'].to(device, dtype=torch.long)
        mask = data['mask_sen'].to(device, dtype=torch.long)
        token_type_ids = data['token_type_ids_sen'].to(device, dtype=torch.long)
        targets = data['targets'].to(device, dtype=torch.float)

        t1 = (ids, mask, token_type_ids)
        optimizer.zero_grad()
        out, attn_t = text_model(t1, 'last')

        # Keep final-epoch predictions and labels for later evaluation
        if epoch + 1 == EPOCH:
            train_out.append(torch.transpose(out, 0, 1).detach().cpu())
            train_true.append(torch.transpose(targets, 0, 1).detach().cpu())

        loss = criterion1(out, targets)
        loss.backward()
        optimizer.step()
        if idx % 100 == 0:
            scheduler.step()

        l1.append(loss.item())  # record this batch's (mean) loss

    loss_log1.append(np.average(l1))  # average loss over the epoch

If I understood correctly, you are asking how the loss is computed when the last batch has only 5 instances while all previous batches have 10. For a batch of 5, the loss is averaged over those 5 instances instead of 10, i.e. it is total_loss / 5.
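For example, with PyTorch's default reduction='mean', the criterion averages over however many elements the batch actually contains, and the DataLoader simply yields a smaller final batch. Here is a minimal, self-contained sketch; the toy dataset, nn.MSELoss, and nn.Linear are illustrative stand-ins, not the model from your question:

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    # Toy dataset: 25 samples with batch_size=10 -> batches of 10, 10, 5
    dataset = TensorDataset(torch.randn(25, 4), torch.randn(25, 1))
    loader = DataLoader(dataset, batch_size=10)  # yields a smaller final batch
    # DataLoader(dataset, batch_size=10, drop_last=True) would drop it instead

    criterion = nn.MSELoss()  # reduction='mean' by default
    model = nn.Linear(4, 1)

    for x, y in loader:
        loss = criterion(model(x), y)  # mean over the actual batch: 10, 10, then 5
        print(x.size(0), loss.item())

So no special handling is needed in your training loop: a mean-reduced loss is already normalized by the actual batch size. If you need every batch to contain exactly batch_size samples, passing drop_last=True to the DataLoader is the standard way to discard the short final batch.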