I recently used the CelebA dataset for image classification with transfer learning, but the loss was highly negative (around -5341) and the accuracy was only about 6-7% after 100 epochs. NLL was used as the criterion. To check whether something was wrong with my code, I ran the same code on CIFAR-10, and it worked well, reaching about 90% accuracy.
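To be concrete about the criterion, here is a minimal NumPy sketch (not my actual training code; the function names are illustrative) of what an NLLLoss-style criterion computes. One thing I noticed while debugging is that this kind of loss can only go negative if the values fed into it are not actual log-probabilities:

```python
import numpy as np

def nll_loss(log_probs, targets):
    # PyTorch-style NLLLoss: mean of -log_probs[i, targets[i]]
    return -np.mean(log_probs[np.arange(len(targets)), targets])

def log_softmax(x):
    # Numerically stable log-softmax over the class dimension
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

rng = np.random.default_rng(0)
raw_scores = rng.normal(loc=3.0, scale=1.0, size=(4, 40))  # raw network outputs
targets = np.array([0, 1, 2, 3])

# Raw positive scores fed straight into the criterion give a negative "loss"
print(nll_loss(raw_scores, targets))
# Log-softmax outputs give a proper non-negative loss
print(nll_loss(log_softmax(raw_scores), targets))
```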
For the CelebA dataset, I placed 128 images per class into 40 different folders, using the 40 attributes of the dataset as the classes. I then noticed that most images belong to several classes at once, since the attributes are not mutually exclusive.
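To illustrate what I mean by the classes overlapping, here is a toy example (the attribute values are made up, but the attribute names are real CelebA attributes) of how one image ends up qualifying for several folders at once under this scheme:

```python
# One image's (hypothetical) attribute annotations; CelebA has 40 such attributes.
attrs = {"Smiling": 1, "Male": 1, "Eyeglasses": 0, "Young": 1}

# Treating each positive attribute as a "class" means this single image
# belongs in the Smiling, Male, and Young folders simultaneously,
# so the 40 folders overlap heavily.
folders = [name for name, present in attrs.items() if present]
print(folders)  # ['Smiling', 'Male', 'Young']
```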
My conclusion is that the highly negative loss was caused by this labeling scheme: the dataset I split into 40 folders is both imbalanced and made of mutually inclusive (overlapping) images.
Can someone please tell me whether my deduction is correct?
Thank you in advance.