I am training a model for multi-class classification, but I am getting really irregular output. I have also tried changing the learning rate, but the effect is still the same.
Below is my code -
I am really not able to figure out how to solve it.
From the graph above, it looks like the loss (assuming it is the training loss) is barely changing. Can you give some more insight into the problem: what dataset is used, what image size, etc.?
Meanwhile, you can do the following:
- Check if the data is correct (valid ground truth, correct input to the network).
- Try to overfit on a small subset of data.
- Try using a higher learning rate from the point where the loss stagnates.
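The overfit-on-a-small-subset check above can be sketched roughly like this (PyTorch; the random tensors and layer sizes are stand-ins for your real dataset and model, so adapt everything to your setup). If training loss does not go to near zero on a few dozen samples, something is wrong with the data pipeline, labels, or loss:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, Subset, TensorDataset

torch.manual_seed(0)

# Stand-in for the real dataset: 32-dim random features, 3 classes.
full_ds = TensorDataset(torch.randn(1000, 32), torch.randint(0, 3, (1000,)))
tiny_ds = Subset(full_ds, range(32))          # one batch worth of data
loader = DataLoader(tiny_ds, batch_size=32, shuffle=True)

# Tiny placeholder model; swap in your actual network here.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

first_loss = last_loss = None
for epoch in range(300):                      # many passes over 32 samples
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    if first_loss is None:
        first_loss = loss.item()
    last_loss = loss.item()

print(f"first: {first_loss:.3f}  last: {last_loss:.3f}")
```

A healthy pipeline memorizes 32 samples easily; a flat loss here points to a bug rather than a capacity or learning-rate problem.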
Hi, thanks for replying.
I have 10,777 images from a medical dataset, which I split according to the 80-20 rule.
There are 3 classes I want to classify. The training set distribution is 3287, 2675, 3737 images.
The dev and test sets are 179, 180, 180 each.
I am using transfer learning with EfficientNet.
The original image size is 2048x2048. I am resizing the images to different sizes: (64,64), (128,128), (224,224), (512,512), (1024,1024), but the results are the same at every size.
Okay, the dataset is well balanced. Like I said, you can check the annotations and input data for correctness.
Also, I have not worked with EfficientNets before. In general, 224x224 or 512x512 should be good enough for image classification. You can also experiment with a simple resnet50 or densenet161 backbone to see whether the network is learning at all.