Correct way to calculate train and valid loss

I am a little confused about how to calculate the train and validation loss. Searching the forums, I think this might be the correct approach, but I am still posting this question as a kind of sanity check.

I first define my loss function, which uses the default reduction="mean":

criterion = nn.CrossEntropyLoss()

Then I accumulate the total loss over all mini-batches in the running_loss variable and divide it by the total number of samples in my dataset:

# Train the model
for epoch in range(epochs):
    new_model.train()   # switch back to training mode (eval() is set during validation below)
    running_loss = 0.0
    running_corrects = 0
    running_total = 0
    for i, (inputs, labels) in enumerate(train_dataloader):
        inputs = inputs.to(device)
        labels = labels.to(device)

        optimizer.zero_grad()
        # Only the forward pass runs under autocast; the backward pass and
        # optimizer step belong outside the autocast context.
        with torch.amp.autocast(device_type="cuda", dtype=torch.float16):
            outputs = new_model(inputs)
            loss = criterion(outputs, labels)

        scaler.scale(loss).backward()    # scale the loss and backpropagate
        scaler.step(optimizer)           # unscale the gradients and update the parameters
        scaler.update()                  # update the scale factor

        running_loss += loss.item() * inputs.size(0)   # undo the "mean" reduction for this batch
        _, predicted = torch.max(outputs, 1)
        running_total += labels.size(0)
        running_corrects += (predicted == labels).sum().item()

    # Calculate the training loss and training accuracy
    train_loss = running_loss / len(train_dataloader.dataset)
    train_accuracy = 100 * running_corrects / running_total

I then do the same for the validation loss:

    # evaluate on the validation set
    correct = 0
    total = 0
    val_loss = 0.0
    new_model.eval()
    with torch.no_grad():
        for data in valid_dataloader:
            images, labels = data
            images = images.to(device)
            labels = labels.to(device)

            outputs = new_model(images)
            _, predicted = torch.max(outputs, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
            val_loss += criterion(outputs, labels).item() * labels.size(0)   # undo the "mean" reduction

    # Calculate the validation accuracy and validation loss
    val_accuracy = 100 * correct / total
    val_loss /= len(valid_dataloader.dataset)

Yes, your approach looks correct, since you are scaling each batch loss by the actual number of samples in that batch and then dividing by the number of samples in the entire dataset, which avoids a potential bias in the loss and accuracy calculations.
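To illustrate why the per-batch weighting matters, here is a minimal sketch with made-up numbers (the loss values and batch sizes are purely hypothetical): since reduction="mean" already averages within each batch, simply averaging the batch losses would over-weight a smaller final batch.

# Hypothetical per-batch mean losses and batch sizes; the last batch is smaller.
batch_losses = [0.50, 0.40, 0.90]
batch_sizes = [32, 32, 8]

# Naive average of the batch means over-weights the small last batch.
naive_avg = sum(batch_losses) / len(batch_losses)                                        # 0.60

# Weighting each batch by its sample count recovers the true per-sample average,
# which is what running_loss / len(train_dataloader.dataset) computes above.
weighted_avg = sum(l * n for l, n in zip(batch_losses, batch_sizes)) / sum(batch_sizes)  # 0.50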

Thank you for your answer!

A follow-up question: I am planning on concatenating my train and validation datasets into one final training set and evaluating my model on the test set. Can I use the same code to train and test my model?

Yes, this should be possible, but note that you should test your final model only once, as it could be seen as data leakage otherwise.
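If it helps, here is a minimal sketch of the concatenation using torch.utils.data.ConcatDataset (the dataset names and DataLoader arguments are just placeholders for your own setup):

from torch.utils.data import ConcatDataset, DataLoader

# Hypothetical names: train_dataset and valid_dataset are the datasets behind the loaders above.
final_train_dataset = ConcatDataset([train_dataset, valid_dataset])
final_train_dataloader = DataLoader(final_train_dataset, batch_size=32,
                                    shuffle=True, num_workers=2)

# len(final_train_dataloader.dataset) still gives the total sample count,
# so the loss normalization in the training loop keeps working unchanged.

The same training and evaluation code should then work as-is, with the test DataLoader taking the place of the validation one.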

I am new to deep learning and PyTorch and want to make sure that I am not making any crucial mistakes, which is why I want to follow up on your answer one more time.

Do you mean that in my whole workflow I should use the test set only once? After hyperparameter tuning, I will train my model on the concatenated dataset for, say, 20 epochs and evaluate on the test set after each epoch, so that I can plot the train and test loss once training is finished. Is this acceptable or is it considered data leakage?
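For context, this is roughly the bookkeeping I have in mind (a minimal sketch; the list names are placeholders for my own variables):

import matplotlib.pyplot as plt

train_losses, test_losses = [], []
# for epoch in range(epochs):
#     ... train for one epoch, then evaluate on the test set ...
#     train_losses.append(train_loss)
#     test_losses.append(test_loss)

# Plot both curves after training is finished.
plt.plot(train_losses, label="train loss")
plt.plot(test_losses, label="test loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()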

I think this is alright as long as you don't plan to retrain the model afterwards based on the testing error or accuracy.
I would consider retraining with another pass over the test dataset a data leak, since you would now be using "leaked" knowledge from your previous run and could try to repeat the entire experiment until the test accuracy reaches your desired threshold.
