RAM Fills Up Despite Using Train Function From PyTorch

Hi there,
I'm trying to build an image classifier, but I keep running into a memory issue. As soon as I execute the training function below, memory usage starts to grow, and before epoch 2 is reached the kernel crashes.

The following is the training function:

import copy
import time

import torch


def train_model(model, dataloaders, criterion, optimizer, num_epochs=25, is_inception=False):
    since = time.time()

    val_acc_history = []
    
    print("Device = ", device)
    
    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)

        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode

            running_loss = 0.0
            running_corrects = 0

            # Iterate over data.
            for inputs, labels in dataloaders[phase]:
                inputs = inputs.to(device)
                labels = labels.to(device)

                # zero the parameter gradients
                optimizer.zero_grad()

                # forward
                # track history if only in train
                with torch.set_grad_enabled(phase == 'train'):
                    # Get model outputs and calculate loss
                    # Special case for inception because in training it has an auxiliary output. In train
                    #   mode we calculate the loss by summing the final output and the auxiliary output
                    #   but in testing we only consider the final output.
                    if is_inception and phase == 'train':
                        # From https://discuss.pytorch.org/t/how-to-optimize-inception-model-with-auxiliary-classifiers/7958
                        outputs, aux_outputs = model(inputs)
                        loss1 = criterion(outputs, labels)
                        loss2 = criterion(aux_outputs, labels)
                        loss = loss1 + 0.4*loss2
                    else:
                        outputs = model(inputs)
                        loss = criterion(outputs, labels)

                    _, preds = torch.max(outputs, 1)

                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)

                # release the references to the batch only after the statistics are computed
                del inputs
                del labels
                loss.detach()

            epoch_loss = running_loss / len(dataloaders[phase].dataset)
            epoch_acc = running_corrects.double() / len(dataloaders[phase].dataset)

            print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))

            # deep copy the model
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())
            if phase == 'val':
                val_acc_history.append(epoch_acc)

        print()

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:4f}'.format(best_acc))

    # load best model weights
    model.load_state_dict(best_model_wts)
    return model, val_acc_history

Then I initialize the model, move it to the GPU, and print the parameters to learn:

import torch.nn as nn
import torch.optim as optim

model_ft, input_size = initialize_model(model_name, n_classes, use_pretrained=True)


# Move Model To GPU
model_ft = model_ft.to(device)

print("Params to learn:")

params_to_update = model_ft.parameters()
for name, param in model_ft.named_parameters():
    if param.requires_grad:
        print("\t", name)

optimizer_ft = optim.Adam(params_to_update, learning_rate)
criterion = nn.CrossEntropyLoss()

I then train the model with the function defined above, and this is where the kernel crashes.

trained_model, hist = train_model(model_ft, dataloaders, criterion, optimizer_ft, num_epochs = epochs)

print(model_ft)

Please let me know if any further information is required to get this issue resolved.

Thanks in advance.

Could you remove the .data usage, as it's not recommended and might yield unwanted side effects?
This shouldn't be causing the increased memory usage, but is nevertheless recommended.
Also, the loss.detach() call won't have any effect as written, since detach() is not an in-place operation and its return value is discarded (this also shouldn't be causing the memory issue).
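For example, a minimal sketch of those two changes, assuming preds, labels, and loss are the tensors from your training loop:

# compare against the tensor directly instead of labels.data
running_corrects += torch.sum(preds == labels)

# detach() returns a new tensor, so the result has to be reassigned to take effect
loss = loss.detach()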

To debug the increased memory usage, could you add print statements to your code and check where the memory is increasing?
You could use:

print(torch.cuda.memory_allocated() / 1024**2)
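For example, a simplified sketch of your inner loop with the checks added (the phase handling and accuracy bookkeeping are left out here):

for inputs, labels in dataloaders[phase]:
    inputs, labels = inputs.to(device), labels.to(device)
    print('before forward: {:.2f} MB'.format(torch.cuda.memory_allocated() / 1024**2))
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    print('after forward: {:.2f} MB'.format(torch.cuda.memory_allocated() / 1024**2))
    loss.backward()
    optimizer.step()
    print('after step: {:.2f} MB'.format(torch.cuda.memory_allocated() / 1024**2))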

Usually users forget to detach() tensors before storing them in lists, so the complete computation graph is stored along with them.
However, epoch_acc should already be detached. You could check this by printing the tensor's .grad_fn.
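For example, using epoch_acc from your code above (storing a plain Python number instead of the tensor is one common way to stay on the safe side):

print(epoch_acc.grad_fn)  # should print None, i.e. no graph attached

# optionally store a plain Python float instead of the tensor
val_acc_history.append(epoch_acc.item())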