Validation accuracy fluctuating by a large amount after 5 epochs

Hello,
I am using SENet-154 to classify images into 7 classes, with 10k training images and 1,500 validation images. The optimizer is SGD with lr=0.0001 and momentum=0.7.
After 4-5 epochs the validation accuracy starts to fluctuate: 60% in one epoch, 50% in the next, then 61% in the epoch after that. I froze 80% of the ImageNet-pretrained weights.

Training Epoch: 6
Total Params: 113742455
Trainable Params: 14952711
Non Trainable Params: 98789744
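
Counts like these can be produced by summing parameter sizes split by requires_grad; a minimal PyTorch sketch (the helper name is illustrative):

import torch

def count_params(model: torch.nn.Module):
    # Sum parameter element counts, split by whether they receive gradients
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
    return trainable + frozen, trainable, frozen

# total, trainable, frozen = count_params(model)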

What percentage of the pretrained weights should be frozen?
What could be the problem?

Thanks
Regards
Milton

I’d recommend starting by freezing all but the last layer.
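
For example, here is a minimal PyTorch sketch, assuming SENet-154 comes from Cadene's pretrainedmodels package, where the classifier head is named last_linear (adapt the attribute name if your model differs):

import torch.nn as nn
import torch.optim as optim
import pretrainedmodels  # assumed source of the SENet-154 weights

model = pretrainedmodels.senet154(num_classes=1000, pretrained='imagenet')

# Freeze the entire backbone
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier with a fresh 7-class head; new modules are trainable by default
model.last_linear = nn.Linear(model.last_linear.in_features, 7)

# Give the optimizer only the trainable parameters
optimizer = optim.SGD((p for p in model.parameters() if p.requires_grad),
                      lr=1e-4, momentum=0.0)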

In terms of debugging your fluctuating accuracy, there are a few things to consider. First, what is your training accuracy doing during those epochs? Is it continuously going up? Also, just to clarify: when you say one epoch, does the model see all of your training and validation data within that epoch? If not, that could be a symptom of the problem. Another thing I’d recommend for now is setting momentum = 0.0 to help debug the issue.
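
As a sanity check, something along these lines (a sketch assuming standard PyTorch DataLoaders named train_loader and val_loader) lets you compare training and validation accuracy after every epoch:

import torch

def accuracy(model, loader, device):
    # Fraction of correctly classified samples over one full pass of the loader
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    return correct / total

# after each training epoch:
# print(f"epoch {epoch}: train {accuracy(model, train_loader, device):.3f}, "
#       f"val {accuracy(model, val_loader, device):.3f}")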

To help debug further, could you post your full source code?

Training accuracy is going up, but very slowly, increasing by about 1-2% per epoch. In every epoch I feed all of the training images to the net.

I am trying with momentum set to zero and will let you know.
What hyper-parameter settings should I use for SGD and Adam in this scenario?
Should I try Adam given the small amount of data?

I would try a wider range of learning rates. Fit the model with 0.1, 0.01, 0.001, and 0.0001 and see what sorts of outcomes you get. Is the training accuracy comparable to the validation accuracy? Also, what is the prior for each of your training classes? Are they things that might be hard to distinguish? Are they similar to the classes in ImageNet? These are all questions that might impact how well training works for your model.
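
For instance, a rough sketch of that sweep, where make_model() and train_and_evaluate() are hypothetical helpers that rebuild the frozen network and return validation accuracy:

import torch

results = {}
for lr in (0.1, 0.01, 0.001, 0.0001):
    model = make_model()  # hypothetical: rebuilds the frozen SENet-154 with a 7-class head
    optimizer = torch.optim.SGD(
        (p for p in model.parameters() if p.requires_grad), lr=lr, momentum=0.0)
    results[lr] = train_and_evaluate(model, optimizer)  # hypothetical: trains and returns val accuracy
print(results)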