I have taken five classes from the ImageNet data set, fed them into a pretrained AlexNet, and want to calculate the accuracy of correctly classified images. With my code below, the loss is decreasing, but the accuracy always stays between 20 and 30% (after 50 epochs). This behaviour confuses me a lot.
My five classes are in subfolders of root, as torchvision.datasets.ImageFolder expects. Altogether there are 6500 images.
batch_size = 32
num_epochs = 100
train_loader = data.DataLoader(dataset=train_dataset, batch_size=batch_size)
val_loader = data.DataLoader(dataset=val_dataset, batch_size=batch_size)

# Counters must be initialized before the loop
correct = 0
total = 0
for images, labels in val_loader:
    images = Variable(images)
    # Forward pass only to get logits/output
    outputs = model(images)
    # Get predictions from the maximum value
    _, predicted = torch.max(outputs.data, 1)
    # Total number of labels
    total += labels.size(0)
    correct += (predicted == labels).sum().item()
    accuracy = 100 * correct / total
    # Print loss and running accuracy
    print('Iteration: {}. Loss: {}. Accuracy: {}'.format(iter, loss.item(), accuracy))
Thanks in advance!