[GPU Memory] Do I need to explicitly del local tensor that is copied to GPU?

I am debugging some code whose GPU memory usage keeps increasing across loop iterations. The code looks like this:

  pbar = enumerate(dataloader)
  ...
  pbar = tqdm(pbar, total=nb)  # progress bar
  optimizer.zero_grad()
  for i, (imgs, targets, paths, _) in pbar:
      imgs = imgs.to(device, non_blocking=True).float() / 255.0
      ...
      loss, loss_items = compute_loss_ota(pred, targets.to(device), imgs)

My questions are:

  1. Do I need to explicitly del imgs at the end of each loop iteration to release memory? If I understand correctly, in each iteration imgs first receives a new batch of images on the CPU and is then reassigned to its copy on the GPU. In the next iteration imgs is assigned a new batch again, so the previous GPU copy is deleted automatically because nothing references it anymore?

  2. Will the GPU copy of targets passed to compute_loss_ota also be cleared automatically, since it only exists as a temporary value inside the call?

Thanks!

  1. Yes, imgs will be deleted unless you are explicitly storing a reference to the “old” imgs tensor somewhere.
  2. Also yes, unless you store a reference to it (same as 1).
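
A quick way to convince yourself of this is to watch torch.cuda.memory_allocated() across iterations. Here is a minimal, self-contained sketch (the tensor shapes and batch size are made up, not taken from your code) showing that the allocated memory stays flat even without an explicit del:

  import torch

  device = torch.device("cuda")

  for i in range(5):
      imgs = torch.rand(16, 3, 640, 640)          # new CPU batch each iteration
      imgs = imgs.to(device, non_blocking=True)   # previous GPU tensor loses its last reference
      print(i, torch.cuda.memory_allocated(device) // 1024**2, "MiB")
      # the reported value stays roughly constant, i.e. the old GPU copy was freed
      # (returned to PyTorch's caching allocator) without any explicit del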

Usually, an increase in memory usage is caused by unknowingly storing tensors, e.g. by accumulating them into a single tensor or appending them to a list, which keeps their computation graphs (and thus the intermediate activations) alive.
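
For example, a pattern like the following (a made-up model and shapes, just to illustrate the pitfall) grows GPU memory every iteration, because each stored loss tensor keeps its entire computation graph alive; storing loss.item() (or a detached copy) instead avoids that:

  import torch

  device = torch.device("cuda")
  model = torch.nn.Linear(1000, 1000).to(device)

  losses = []
  for i in range(100):
      out = model(torch.rand(64, 1000, device=device))
      loss = out.mean()

      # losses.append(loss)        # leaks: keeps the graph + activations of every iteration alive
      losses.append(loss.item())   # fine: stores a plain Python float, the graph can be freed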