I saw people using the following code:
```python
for i, (x_batch,) in enumerate(test_loader):
    y_pred = model(x_batch.float()).detach()
    test_preds_fold[i * batch_size:(i+1) * batch_size] = y_pred.cpu().numpy()
```
where `test_loader` is a `DataLoader`. I know that `drop_last` defaults to `False`, so this code looks incorrect to me when the number of samples is not divisible by the batch size: on the last (partial) batch, I would expect `(i+1) * batch_size` to exceed the capacity of `test_preds_fold` and cause an error. But when I tried running the code, there was no problem. Now I am confused.
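The behavior in question can be reproduced in isolation (assuming `test_preds_fold` is a NumPy array): with basic slicing, NumPy clips out-of-range slice bounds to the array's length instead of raising, so the over-long slice on the final batch silently shrinks to match the remaining elements. A minimal sketch:

```python
import numpy as np

# 10 predictions total, batch_size 4 -> last batch has only 2 items.
preds = np.zeros(10)
last_batch = np.ones(2)

# i = 2 on the last batch: slice is [8:12], past the end of the array.
# NumPy clips it to [8:10], which has length 2 and matches last_batch.
preds[2 * 4:(2 + 1) * 4] = last_batch

print(preds)              # -> [0. 0. 0. 0. 0. 0. 0. 0. 1. 1.]
print(len(preds[8:12]))   # -> 2, the clipped slice length
```

Note this clipping applies only to slices; a single out-of-range integer index like `preds[12]` would still raise an `IndexError`.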