Custom ImageLoader, accuracy is zero

Dear All,

I am trying to understand what I am doing wrong. I have several multi-label datasets, all of which have the same structure on the local file system, compatible with torch's folder convention, e.g.:
For the MNIST dataset, which has 10 different labels and hence 10 folders:
(screenshot of the MNIST folder structure)
For the UCF101 dataset, which has 102 different labels and hence 102 folders:
(screenshot of the UCF101 folder structure)

For reference the code for this post is here:

When using the code on CIFAR-10, which is downloaded automatically, e.g.:

Learning works great, and both the loss and the accuracy improve over time.

However, for MNIST, UCF101, or any other dataset that I store locally, learning does not happen at all.

Questions:

  1. Is it true that the default torchvision ImageFolder takes care of the labels for classification, without the need to one-hot encode them? I am using CrossEntropyLoss and returning the raw FC output from forward:
    (screenshot of the model's forward method)

  2. In case I am doing it right, where is my mistake in the code? Why does CIFAR work well, while local datasets that conform to the multi-label folder convention do not?
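On the first question, a minimal sketch of how the pieces are meant to fit together (the shapes and class count here are illustrative, not taken from the post): `nn.CrossEntropyLoss` expects raw logits of shape `(N, C)` and integer class indices of shape `(N,)`, which is exactly what ImageFolder produces, so no one-hot encoding is needed.

```python
import torch
import torch.nn as nn

# CrossEntropyLoss takes raw logits (no softmax) and integer class indices
# in the range 0..C-1 -- not one-hot vectors. ImageFolder assigns such
# indices automatically from the sorted sub-folder names.
criterion = nn.CrossEntropyLoss()

logits = torch.randn(4, 10)          # e.g. FC output for a batch of 4, 10 classes
labels = torch.tensor([3, 0, 9, 1])  # integer class indices, shape (4,)
loss = criterion(logits, labels)
print(loss.item())                   # a positive scalar
```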

Many thanks,

The code I use is here:

I just modified the DataLoader to an ImageFolder and provided a path to a locally hosted MNIST.

@smth … I really need help here, the exact same issue on several different data sets.
Thanks :slight_smile:

Does the training loss go down at all? Or are both the training loss and the validation accuracy bad?

Both are bad; I think it is related to the labels.

Yeah, if the training loss doesn't go down it's definitely the labels. Just print out the labels / add some print statements in torchvision's folder.py.
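The label-printing suggestion above can be turned into a small sanity check; this is just one way to do it (the `check_labels` helper and the dummy data are made up for illustration). It verifies that labels are int64 indices in range and counts how often each class appears, so an always-zero or out-of-range label shows up immediately.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def check_labels(loader, num_classes):
    """Validate the labels a loader yields and count samples per class."""
    seen = torch.zeros(num_classes, dtype=torch.long)
    for _, labels in loader:
        assert labels.dtype == torch.long, "CrossEntropyLoss needs int64 labels"
        assert labels.min() >= 0 and labels.max() < num_classes, "label out of range"
        seen += torch.bincount(labels, minlength=num_classes)
    return seen

# Dummy tensors standing in for the real dataset, just to show the check:
images = torch.randn(8, 3, 32, 32)
labels = torch.tensor([0, 1, 2, 0, 1, 2, 0, 1])
loader = DataLoader(TensorDataset(images, labels), batch_size=4)
counts = check_labels(loader, num_classes=3)
print(counts)  # a zero entry would mean that class never appears
```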

Here it is:


I tried both ImageFolder and my own custom DataLoader.