Help! CUDA memory keeps increasing during the training loop, and the usual fixes do not work!

            optimizer.zero_grad()
            for i_batch, sample in enumerate(dataloaders[phase]):
                if phase == 'train':
                    img_list = sample['image']
                    img_tensor = torch.stack([img for img in img_list]).squeeze(1)
                    inputs = Variable(img_tensor, requires_grad=False).cuda()
                    labels = Variable(sample['label']).cuda()
                    weights = sample['weight'].cuda()

                    outputs = model(inputs)
                    criterion = nn.BCELoss(weight=weights.float())
                    loss = criterion(outputs, labels.float())
                    running_loss += loss.data[0]
                    loss /= (batch_num * len(img_list))
                    loss.backward()
                    running_corrects += torch.sum((torch.max(outputs, 1)[0] > 0.5) == labels.byte()).float()
                    del inputs, labels, outputs
                    if i_batch % batch_num == (batch_num - 1):
                        train_num += batch_num
                        optimizer.step()
                        #ipdb.set_trace()
                        optimizer.zero_grad()
                        loss.detach()
                        print('batch: %d loss: %.4f acc: %.4f'
                            % (i_batch, running_loss / train_num,
                            running_corrects.data.cpu().numpy() / train_num))

Hi, the code follows the iter_size pattern (batch_num here): the loss is backpropagated for every mini-batch, and optimizer.step() is called once every iter_size mini-batches. The problem is that CUDA memory keeps increasing as the iterations go on. I've tried the methods mentioned in similar topics, but they do not work. Can anyone help me?
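
For context, the iter_size pattern by itself looks roughly like this (a minimal sketch with a toy model, plain SGD, and illustrative names, not the actual training code):

    import torch
    import torch.nn as nn

    device = 'cuda' if torch.cuda.is_available() else 'cpu'

    # Toy model and optimizer; names are placeholders only.
    model = nn.Linear(16, 1).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    batch_num = 4  # number of mini-batches to accumulate before one update

    optimizer.zero_grad()
    for i_batch in range(20):
        inputs = torch.randn(8, 16, device=device)
        targets = torch.randn(8, 1, device=device)
        # Scale the loss so the accumulated gradients average over batch_num mini-batches.
        loss = nn.functional.mse_loss(model(inputs), targets) / batch_num
        loss.backward()                        # gradients accumulate in .grad
        if i_batch % batch_num == (batch_num - 1):
            optimizer.step()                   # one update per batch_num mini-batches
            optimizer.zero_grad()

Dividing the loss by batch_num makes the accumulated gradient an average over the accumulated mini-batches rather than a sum.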

I've solved the problem; it lies in running_corrects.

What was the problem?

The accumulation line for running_corrects ("running_corrects += torch.sum(...)") holds on to a result computed from outputs that is still attached to the computation graph, so the graph is kept alive in CUDA memory and usage grows as more iterations run.
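
For anyone hitting the same issue, below is a minimal sketch of memory-safe accumulation. It assumes PyTorch 0.4+, where .item() and .detach() are available; the model, shapes, and names are toy placeholders, not the original code. On older versions the equivalent idea is to accumulate values taken from .data (e.g. loss.data[0]) so the running totals never reference the graph.

    import torch
    import torch.nn as nn

    device = 'cuda' if torch.cuda.is_available() else 'cpu'

    # Toy stand-ins for one iteration of the loop above (assumption: binary
    # classification with sigmoid outputs, as implied by BCELoss).
    model = nn.Sequential(nn.Linear(16, 1), nn.Sigmoid()).to(device)
    inputs = torch.randn(8, 16, device=device)
    labels = torch.randint(0, 2, (8, 1), device=device).float()

    outputs = model(inputs)                        # attached to the autograd graph
    loss = nn.functional.binary_cross_entropy(outputs, labels)

    running_loss = 0.0
    running_corrects = 0.0

    # Accumulate plain Python numbers (or detached tensors), never tensors that
    # are still attached to the graph; otherwise every iteration's graph stays
    # alive in CUDA memory.
    running_loss += loss.item()
    preds = (outputs.detach() > 0.5).float()
    running_corrects += (preds == labels).float().sum().item()

    loss.backward()                                # graph can now be freed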
