I am debugging code whose GPU memory usage keeps increasing as the training loop runs. The code looks like this:
```python
pbar = enumerate(dataloader)
...
pbar = tqdm(pbar, total=nb)  # progress bar
optimizer.zero_grad()
for i, (imgs, targets, paths, _) in pbar:
    imgs = imgs.to(device, non_blocking=True).float() / 255.0
    ...
    loss, loss_items = compute_loss_ota(pred, targets.to(device), imgs)
```
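For reference, this is how I am tracking the growth. A minimal sketch; `log_gpu_memory` is a hypothetical helper I call once per iteration, not part of the original script:

```python
import torch

def log_gpu_memory(step: int) -> None:
    # Memory currently held by live tensors vs. memory the CUDA caching
    # allocator has reserved from the driver. Steady growth in `allocated`
    # is what suggests tensors are being kept alive across iterations.
    allocated = torch.cuda.memory_allocated() / 2**20  # MiB
    reserved = torch.cuda.memory_reserved() / 2**20    # MiB
    print(f"step {step}: allocated={allocated:.1f} MiB, reserved={reserved:.1f} MiB")
```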
My questions are:
Do I need to explicitly `del imgs` after each loop iteration to release memory? If I understand correctly, on each iteration `imgs` is first bound to a new batch of images on the CPU, then rebound to its copy on the GPU. On the next iteration, `imgs` is bound to yet another batch, so the previous GPU copy is deleted automatically because there is no longer any reference to it. Is that right?
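Here is a self-contained toy sketch (my own, outside the training loop) of the behavior I am assuming, using `torch.cuda.memory_allocated()` to check that rebinding alone frees the old GPU tensor:

```python
import torch

device = torch.device("cuda")

x = torch.randn(1024, 1024, device=device)  # "batch 1" on the GPU
torch.cuda.synchronize()
before = torch.cuda.memory_allocated()

# Rebinding `x` drops the last reference to batch 1; its memory should be
# returned to PyTorch's caching allocator once the new tensor is assigned.
x = torch.randn(1024, 1024, device=device)  # "batch 2"
torch.cuda.synchronize()
after = torch.cuda.memory_allocated()

print(before, after)  # I expect these to match, i.e. no net growth per rebind
```

(One thing I noticed while writing this: the right-hand side is evaluated before the rebinding, so peak memory briefly holds both tensors.)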
As for the GPU copy of `targets` passed to `compute_loss_ota`, will it be cleared automatically because it is only a temporary/local value (nothing in the loop binds `targets.to(device)` to a name)?
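And a toy version of this second question (`dummy_loss` is a hypothetical stand-in for `compute_loss_ota`), checking that the unnamed GPU copy is freed once the call returns:

```python
import torch

device = torch.device("cuda")

def dummy_loss(t: torch.Tensor) -> torch.Tensor:
    # Stand-in for compute_loss_ota: consumes the GPU copy, returns a scalar.
    return t.float().mean()

targets = torch.zeros(10000, 6)        # CPU tensor, like the dataloader output
loss = dummy_loss(targets.to(device))  # the GPU copy is never bound to a name
torch.cuda.synchronize()

# Once dummy_loss returns, the temporary GPU copy has no references left,
# so only the scalar `loss` should still be allocated.
print(torch.cuda.memory_allocated())
```

My assumption is that this only holds when nothing else retains the copy; if it fed differentiable operations, the autograd graph hanging off `loss` could keep it alive until `backward()` runs or `loss` itself is dropped.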