I have two related questions. If I have an unbalanced dataset, how do I properly calculate the per-class accuracy for each batch? And if I add class weights, should I then treat the dataset as balanced and calculate the accuracy the usual way?

In the following code snippet I assume that the dataset is balanced:

```python
for local_batch, local_labels in training_generator:
    # Transfer the data to the GPU/CPU
    local_batch, local_labels = local_batch.to(device), local_labels.to(device)
    # Zero the parameter gradients
    optimizer.zero_grad()
    # Forward pass: compute predicted outputs by passing inputs to the model
    predictions = model(local_batch)
    # Calculate training accuracy
    predicted_labels = predictions.argmax(dim=1, keepdim=True).squeeze()
    num_train_correct = (predicted_labels == local_labels).sum().item()
    train_accuracy = num_train_correct / local_batch.size(0)
    ...
```

What if my dataset is unbalanced? How do I then calculate the per-class accuracy for each batch while training?

And what if I add class weights to CrossEntropyLoss? Should I then assume the dataset is balanced and calculate the accuracy as below, or is there a third way to calculate accuracy when class weights are applied?

If you initialize the Cross Entropy weights proportional to one over the class prior (1/p_i) for each class, then you're minimizing the average recall over all classes.
Accuracy is a good estimate of average recall if you have plenty of data; if not, you should calculate the average recall directly.
You can use Ignite's ConfusionMatrix to compute any classification metric you want.
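To see why plain accuracy can be a poor stand-in for average recall on an unbalanced set, here is a small pure-Python sketch (the labels and predictions are made up for illustration): a classifier biased toward the majority class scores high accuracy but much lower average recall.

```python
from collections import defaultdict

# Hypothetical imbalanced toy data: 8 samples of class 0, 2 of class 1.
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
# A classifier biased toward the majority class: every class-0 sample is
# right, but only one of the two class-1 samples.
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]

# Plain accuracy: fraction of all samples predicted correctly.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Per-class recall: correct predictions divided by true count, per class.
correct = defaultdict(int)
total = defaultdict(int)
for t, p in zip(y_true, y_pred):
    total[t] += 1
    if t == p:
        correct[t] += 1
recall = {c: correct[c] / total[c] for c in total}
avg_recall = sum(recall.values()) / len(recall)

print(accuracy)    # 0.9
print(recall)      # {0: 1.0, 1: 0.5}
print(avg_recall)  # 0.75
```

With a 1/p_i-weighted loss you are optimizing the 0.75 number (average recall), not the flattering 0.9.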

Hi @mMagmer, so how would you calculate it for each batch? Could you give an example?

```python
for epoch in range(n_epochs):
    for local_batch, local_labels in training_generator:
        # Transfer the data to the GPU/CPU
        local_batch, local_labels = local_batch.to(device), local_labels.to(device)
        # Zero the parameter gradients
        optimizer.zero_grad()
        # Forward pass: compute predicted outputs by passing inputs to the model
        predictions = model(local_batch)
        # Calculate training accuracy
        predicted_labels = predictions.argmax(dim=1, keepdim=True).squeeze()
        num_train_correct = ????
        train_accuracy = ????
        ...
```

```python
# cm is a NumPy confusion matrix of shape (num_classes, num_classes),
# e.g. cm = np.zeros((num_classes, num_classes))
cm.fill(0)  # you can zero it every 10 batches instead, for a better estimate
for i, j in zip(local_labels, predicted_labels):
    cm[i, j] += 1
recall = cm.diagonal() / (cm.sum(axis=1) + 1e-15)
avg_recall = recall.mean()
```
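A self-contained, pure-Python version of the same idea, run over a hypothetical stream of (labels, predictions) batches standing in for the real `training_generator` output; the matrix accumulates across batches and is reset every `RESET_EVERY` batches, as suggested above:

```python
NUM_CLASSES = 2
RESET_EVERY = 10  # zero the matrix every 10 batches for a fresher estimate

# Hypothetical per-batch (labels, predictions) pairs for illustration.
batches = [
    ([0, 0, 1, 0], [0, 1, 1, 0]),
    ([1, 0, 0, 0], [1, 0, 0, 0]),
]

cm = [[0] * NUM_CLASSES for _ in range(NUM_CLASSES)]
for step, (labels, preds) in enumerate(batches):
    if step > 0 and step % RESET_EVERY == 0:
        cm = [[0] * NUM_CLASSES for _ in range(NUM_CLASSES)]  # periodic reset
    # Rows index the true class, columns the predicted class.
    for i, j in zip(labels, preds):
        cm[i][j] += 1
    # Recall per class: diagonal over row sums (guarding empty rows).
    recall = [cm[c][c] / max(sum(cm[c]), 1) for c in range(NUM_CLASSES)]
    avg_recall = sum(recall) / NUM_CLASSES

print(cm)          # [[5, 1], [0, 2]]
print(avg_recall)  # (5/6 + 1.0) / 2
```

The estimate computed right after a reset only reflects one batch, so it is noisier; it sharpens as more batches accumulate before the next reset.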

Ah yes, that makes sense @mMagmer. Thanks.
But why does it give a better estimate to zero it every 10 batches? Won't the batches right after it is set to zero give a bad estimate then?

And a second question:

I initialize it like this, with the class weights passed as a parameter. Should I then consider the dataset balanced?

The more samples you have, the better the estimate. And yes, the last batches before a reset give a better recall estimate than the first ones after it, but there is no way around that.

It really depends on the distribution of the data, but you're probably closer to being balanced when minimizing the weighted Cross Entropy.
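A pure-Python sketch of why 1/p_i weights "rebalance" the loss, using made-up class priors and per-sample probabilities (in PyTorch this corresponds to passing `weight=` to `nn.CrossEntropyLoss`, which with `reduction='mean'` divides by the sum of the applied weights):

```python
import math

# Hypothetical class priors for an imbalanced 2-class problem.
priors = [0.9, 0.1]
# Weights proportional to 1/p_i, as suggested in the thread.
weights = [1.0 / p for p in priors]

# Toy per-sample data: (true class, model probability for the true class).
samples = [(0, 0.8)] * 9 + [(1, 0.6)]  # 9 majority, 1 minority sample

# Weighted mean cross entropy: sum of w_y * (-log p_y), divided by the
# total applied weight (matching reduction='mean' semantics).
num = sum(weights[c] * -math.log(p) for c, p in samples)
den = sum(weights[c] for c, _ in samples)
weighted_ce = num / den

unweighted_ce = sum(-math.log(p) for _, p in samples) / len(samples)
print(weighted_ce, unweighted_ce)
```

With 1/p_i weights, each class contributes the same total weight (here 10 and 10), so the single minority sample now carries as much weight as all nine majority samples combined, and its loss is no longer drowned out.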

In my experience, it is a good solution for imbalance ratios lower than 10.
Using weight decay in the optimizer also helps (as do other regularization techniques).