Code ends abruptly for no apparent reason

Hi,

import torch
from tqdm import tqdm

feature_vecs = []
model.eval()
with torch.no_grad():
    for idx, (images, targets) in tqdm(enumerate(train_dl), total=len(train_dl)):
        # GeneralizedRCNNTransform resizes/normalizes the batch into an ImageList
        imgs, _ = model.transform(images, targets)
        # take the first FPN feature map and move it off the GPU
        fv = model.backbone(imgs.tensors.to(device))["0"].cpu()
        feature_vecs.append(fv)

I am running this code, where model is a torchvision.models.detection.faster_rcnn model and images and targets are valid inputs.

The loop stops abruptly at batch 45/1039 of the dataset. I have plenty of memory, so why does it terminate seemingly at random?

Does it give an error? Depending on how big feature_vecs gets, that could be the problem: every appended feature map stays in host RAM for the whole run, so the process can be killed for running out of memory even though each individual batch is small.
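If accumulation turns out to be the issue, one common workaround is to write each batch's features to disk as you go instead of keeping them all in a Python list. A minimal sketch (the `extract_and_save` helper and the dummy tensors are hypothetical, standing in for the backbone outputs in your loop):

    import os
    import tempfile

    import torch

    def extract_and_save(feature_iter, out_dir):
        """Save each feature tensor to disk instead of accumulating in RAM."""
        paths = []
        for idx, fv in enumerate(feature_iter):
            path = os.path.join(out_dir, f"features_{idx:05d}.pt")
            torch.save(fv.cpu(), path)  # move off the GPU before saving
            paths.append(path)
        return paths

    # hypothetical usage: dummy tensors stand in for model.backbone(...)["0"]
    with tempfile.TemporaryDirectory() as d:
        dummy = (torch.randn(2, 256, 8, 8) for _ in range(3))
        saved = extract_and_save(dummy, d)
        loaded = torch.load(saved[0])
        print(len(saved), tuple(loaded.shape))

This keeps peak memory at roughly one batch's worth of features; you can torch.load the files lazily later (e.g. from a Dataset) rather than holding the whole extraction run in RAM at once.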