Large linear increase in memory when predicting with torchvision.detection model in for loop

Hi all,

I have the following code:

import cv2
import numpy as np
import torch
from torchvision.models import detection
from tqdm import tqdm

model = detection.fasterrcnn_resnet50_fpn(pretrained=True, progress=True, pretrained_backbone=True).to(DEVICE)
model.eval()  # inference mode (training mode would require targets)

for i in tqdm(range(train.shape[0])):
    image = cv2.imread(train_img_paths[i])
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    image = image.transpose((2, 0, 1))      # HWC -> CHW
    image = image / 255.0                   # scale to [0, 1]
    image = np.expand_dims(image, axis=0)   # add batch dimension
    image = torch.FloatTensor(image)
    image = image.to(DEVICE)
    predictions = model(image)[0]

The images aren’t very big, roughly 200 to 800 pixels.

After about 30 images, memory usage reaches 16 GB.
Is there a way to avoid that?

Thank you for any tips and help, and apologies if this is silly code!

Someone on Stack Overflow provided me with a very simple solution:

wrap the whole prediction loop in
with torch.no_grad():
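
For reference, here is roughly how that fix looks applied to the loop above (a minimal sketch; DEVICE, train, train_img_paths, and model are assumed to be defined as in the original snippet):

# Disable gradient tracking during inference so autograd never builds
# a computation graph, keeping memory usage flat across iterations.
with torch.no_grad():
    for i in tqdm(range(train.shape[0])):
        image = cv2.imread(train_img_paths[i])
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        image = image.transpose((2, 0, 1))
        image = image / 255.0
        image = np.expand_dims(image, axis=0)
        image = torch.FloatTensor(image).to(DEVICE)
        predictions = model(image)[0]

Without torch.no_grad(), each forward pass builds an autograd graph that holds on to intermediate activations, which is what makes memory usage grow when you only want predictions.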