Recently I ran a simple classification model on the MNIST dataset, and I found that sometimes I got 98% accuracy after just 1 epoch and sometimes only 50%. The result changed every run, and the differences were big. Then I tried fixing the random seed and trying different values, and I found the result varied a lot depending on the seed. For example, using Adadelta as the optimizer and running 1 epoch: with random seed 1 the accuracy was 98%, but with random seed 1000 the accuracy dropped to 69%.
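To illustrate what fixing the seed changes, here is a minimal sketch in pure Python (the question doesn't name a framework, so `init_weights` is just a hypothetical helper standing in for a network's weight initialization): the same seed always produces the same starting weights, while different seeds produce different starting points, which is where run-to-run variation begins.

```python
import random

def init_weights(seed, n=5):
    # Draw n initial weights from U(-0.1, 0.1) using a seeded RNG,
    # mimicking how a framework initializes a layer's parameters.
    rng = random.Random(seed)
    return [rng.uniform(-0.1, 0.1) for _ in range(n)]

# Same seed -> identical initial weights, so training is repeatable.
assert init_weights(1) == init_weights(1)

# Different seeds -> different initial weights, so training can land
# in a noticeably different place, especially after only 1 epoch.
assert init_weights(1) != init_weights(1000)
```

In a real framework you would also need to seed every RNG involved (Python, NumPy, and the framework's own, plus data shuffling) for runs to be fully repeatable.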
Besides, when I use Adam as the optimizer and run 1 epoch, the result is usually bad (60-70% accuracy, though it sometimes reaches 98%), but with Adadelta the accuracy is usually good (98%). Both optimizers are at their default settings.
I understand that these choices influence the result, but this influence just feels too big. Can anyone tell me why? I would really appreciate it. Thanks!