How to calculate loss per epoch

Could someone please tell me which of the two methods is the correct way to calculate the loss?
Below you can find my training loop.

    train_losses = 0
    valid_losses = 0
    avg_train_losses = []
    avg_valid_losses = [] 
    
    iteration = 0

    for epoch in range(num_epochs):

        model.train()  # set model to training mode
        train_losses = 0  # reset the running loss at the start of each epoch
        train_data = reader.train_data(shuffle=True)
        y_pred = []
        y_true = []
  
        for i in range(len(reader.train) // batch_size):
            images, labels = get_batch(train_data, batch_size)
            
            # transform numpy arrays to torch tensors
            images = torch.from_numpy(images)
            labels = torch.from_numpy(labels)
    
            # move to GPU
            images, labels = images.to(device, dtype=torch.float), labels.to(device, dtype=torch.float)

            # Clear gradients w.r.t. parameters
            optimizer.zero_grad()

            # Reshape: view cannot reorder dimensions, so go NHWC -> NCHW with permute
            images = images.view(-1, 256, 256, 3).permute(0, 3, 1, 2).contiguous()
            labels = labels.view(-1, 1)


            # Forward pass only to get logits/output
            outputs = model(images)

            # Calculate Loss: sigmoid BCELoss
            loss = criterion(outputs, labels)
            
            # Getting gradients w.r.t. parameters
            loss.backward()
            
            # train AUC
            predicted_train = outputs.view(-1)

            # Updating parameters
            optimizer.step()
            
            # item() returns a plain Python float, detached from the graph
            loss = loss.item()
            # add to the running epoch loss
            train_losses += loss
            
            # move labels and predictions to the CPU and convert to numpy for the AUC
            labels = labels.detach().cpu().numpy()
            predicted_train = predicted_train.detach().cpu().numpy()
            
            # collect predictions and labels for the epoch-level AUC
            y_true.extend(labels.tolist())
            y_pred.extend(predicted_train.tolist())
    
        # calculate AUC of ROC
        roc_auc_train = roc_auc_score(y_true, y_pred)
            
        # print training statistics
        train_loss = train_losses/len(reader.train)
        avg_train_losses.append(train_loss)

Or do I have to accumulate train_losses like this instead:

    train_losses += loss * images.size(0)
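To make the comparison concrete, here is a toy sketch with made-up batch losses (assuming criterion returns the per-batch mean, which is the default reduction):

```python
# Toy sketch comparing the two accumulation methods.
# Numbers are made up; criterion is assumed to return the batch MEAN loss.
batch_losses = [0.8, 0.6, 0.5]  # mean loss of each batch
batch_size = 4
n_samples = 12                  # 3 full batches of 4 samples

# Method 1: sum the batch means, then divide by the number of samples
method1 = sum(batch_losses) / n_samples              # ~0.1583, too small

# Method 2: weight each batch mean by its size, then divide by the samples
method2 = sum(l * batch_size for l in batch_losses) / n_samples  # ~0.6333

# Method 2 recovers the true per-sample mean; method 1 is off by a factor of batch_size
print(method1, method2)
```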

You can sum the training or test loss across all batches, something like this:

    test_loss = test_loss + loss.item()

After that, simply divide the accumulated loss by the number of batches per epoch:

    alpha = len(test_loader.dataset) / batch_size  # number of batches per epoch
    test_loss /= alpha
    print(test_loss)
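As a quick sanity check with toy numbers (again assuming the criterion returns the per-batch mean):

```python
# Toy check: summing batch-mean losses and dividing by the number of
# batches recovers the per-sample mean when every batch is full.
losses_per_sample = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
batch_size = 2
batches = [losses_per_sample[i:i + batch_size]
           for i in range(0, len(losses_per_sample), batch_size)]

test_loss = 0.0
for batch in batches:
    batch_mean = sum(batch) / len(batch)   # what criterion(...) returns
    test_loss = test_loss + batch_mean

alpha = len(losses_per_sample) / batch_size  # number of batches
test_loss /= alpha

true_mean = sum(losses_per_sample) / len(losses_per_sample)
print(test_loss, true_mean)   # both 3.5
```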

Hi!
Thanks for your reply! So this is basically the same as doing the following, right?

    train_losses += loss.item() * images.size(0)
    train_loss = train_losses / len(reader.train)

with images.size(0) corresponding to the batch size.

Yes, it is essentially the same calculation.
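One caveat: the two only match exactly when batch_size divides the dataset evenly. With a ragged last batch, weighting by images.size(0) stays exact, while dividing by the batch count drifts slightly. A toy illustration (hypothetical numbers):

```python
# Toy illustration: a ragged last batch makes the batch-count average
# differ from the exact per-sample average.
losses = [1.0, 2.0, 3.0, 4.0, 5.0]   # per-sample losses, made up
batch_size = 2
batches = [losses[i:i + batch_size] for i in range(0, len(losses), batch_size)]
# -> [[1, 2], [3, 4], [5]]: the last batch has only one sample

batch_means = [sum(b) / len(b) for b in batches]          # [1.5, 3.5, 5.0]

# Average over batches: slightly off, because the small last batch
# counts as much as a full one
avg_over_batches = sum(batch_means) / len(batches)        # 10/3 ~ 3.33

# Weight each batch mean by its size: exact per-sample mean
weighted = sum(m * len(b) for m, b in zip(batch_means, batches)) / len(losses)
# -> 15/5 = 3.0, matching sum(losses)/len(losses)
print(avg_over_batches, weighted)
```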