What does dataloader do when # data does not divide batch size

I saw people using the following code:

    for i, (x_batch,) in enumerate(test_loader):
        y_pred = model(x_batch.float()).detach()      
        test_preds_fold[i * batch_size:(i+1) * batch_size] = y_pred.cpu().numpy()

where test_loader is a DataLoader. I know that drop_last defaults to False, so this code does not look correct to me when the number of samples is not divisible by the batch size. I would expect (i+1) * batch_size to exceed the length of test_preds_fold on the last batch, but when I tried running the code there was no problem. Now I am confused.

Maybe this snippet can help you see what is happening:

    import numpy as np
    a = np.arange(10)
    print(a[8:16])  # prints [8 9]: the slice is clipped to the array bounds

Even if the slice i * batch_size:(i+1) * batch_size extends past the end of test_preds_fold, the assignment causes no problem, because NumPy clips out-of-range slices to the array bounds. The clipped slice on the last iteration has exactly the length of the final (shorter) batch, so the assignment succeeds, provided that the total number of samples equals the length of test_preds_fold.
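To see the assignment case concretely, here is a minimal pure-NumPy sketch that simulates the batch sizes a DataLoader with drop_last=False would produce (the variable names and sizes are made up for illustration):

```python
import numpy as np

n_samples, batch_size = 10, 4          # last batch has only 2 samples
test_preds_fold = np.zeros(n_samples)

# simulate DataLoader output with drop_last=False: batch sizes 4, 4, 2
batches = [np.arange(s, min(s + batch_size, n_samples), dtype=float)
           for s in range(0, n_samples, batch_size)]

for i, y_pred in enumerate(batches):
    # on the last iteration the slice is 8:12, past the end of the array,
    # but NumPy clips it to 8:10, so the 2-element batch fits exactly
    test_preds_fold[i * batch_size:(i + 1) * batch_size] = y_pred

print(test_preds_fold)  # [0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
```

If the total number of samples did not match len(test_preds_fold), the clipped slice and the batch would have different lengths and NumPy would raise a broadcasting error, which is how the code in the question stays safe.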