Calculate macro averages for accuracy and loss in validation

Hi,

I have an imbalanced training set and I'm currently using focal loss to train on it. My validation set is even more imbalanced, and I would like a better representation of the true loss for each class. At the moment I simply calculate the validation accuracy and loss per epoch and graph them on TensorBoard. I was wondering if there was a way of calculating “macro” averages. This would allow me to better judge which model to save…

model.eval()
val_loss = 0
val_accuracy = 0
val_predictions = []
val_groundtruth = []

with torch.no_grad():  # Tell torch not to calculate gradients
    for i, (val_inputs, paths, val_labels) in enumerate(dataloaders_dict['val']):
        val_inputs = val_inputs.to(device)
        val_labels = val_labels.to(device)
        val_outputs = model(val_inputs)
        val_loss += criterion(val_outputs, val_labels).item()

        # Predicted class = index of the largest logit
        _, val_pred = torch.max(val_outputs, 1)
        val_equality_check = (val_labels == val_pred)
        val_accuracy += val_equality_check.float().mean().item()

        # Collect predictions and ground truth for later per-class analysis
        val_predictions += val_pred.cpu().tolist()
        val_groundtruth += val_labels.cpu().tolist()

# Averages over the number of validation batches
val_losses = val_loss / len(dataloaders_dict['val'])
validation_accuracy = val_accuracy / len(dataloaders_dict['val'])
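In case it helps to illustrate what I mean by “macro”: below is a minimal sketch of how per-class loss and accuracy could be accumulated and then averaged across classes. It assumes a `num_classes` variable is available, and it uses `cross_entropy(..., reduction='none')` as a stand-in for a per-sample focal loss (most focal loss implementations let you turn off the reduction in a similar way). The names here are just for illustration, not part of my actual training script.

import torch
import torch.nn.functional as F

# Per-class accumulators (one slot per class)
per_class_loss = torch.zeros(num_classes)
per_class_correct = torch.zeros(num_classes)
per_class_count = torch.zeros(num_classes)

with torch.no_grad():
    for val_inputs, paths, val_labels in dataloaders_dict['val']:
        val_inputs = val_inputs.to(device)
        val_labels = val_labels.to(device)
        val_outputs = model(val_inputs)

        # Per-sample loss instead of the batch mean
        sample_losses = F.cross_entropy(val_outputs, val_labels, reduction='none')
        _, val_pred = torch.max(val_outputs, 1)

        for c in range(num_classes):
            mask = (val_labels == c)                       # samples whose true label is c
            per_class_loss[c] += sample_losses[mask].sum().cpu()
            per_class_correct[c] += (val_pred[mask] == c).sum().cpu()
            per_class_count[c] += mask.sum().cpu()

# Average within each class first, then across classes ("macro" average)
valid = per_class_count > 0
macro_loss = (per_class_loss[valid] / per_class_count[valid]).mean().item()
macro_accuracy = (per_class_correct[valid] / per_class_count[valid]).mean().item()

The macro accuracy computed this way is effectively the mean per-class recall (balanced accuracy), so every class counts equally regardless of how many samples it has. The collected `val_groundtruth` / `val_predictions` lists could also be fed to something like `sklearn.metrics.classification_report`, which reports macro-averaged precision, recall and F1 directly.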