Why do the random seed and optimizer have such a big influence on the result?

Dear all,
Recently I ran a simple classification script on the MNIST dataset. Sometimes I got 98% accuracy after just 1 epoch and sometimes only 50%. The result changed every run and the differences were big. Then I tried fixing the random seed and experimented with different values, and found the result still changed a lot depending on the seed. For example, using Adadelta as the optimizer and training for 1 epoch, a random seed of 1 gave 98% accuracy, while a random seed of 1000 gave only 69%.
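
For reference, the seed is fixed roughly like this (a minimal sketch; the model and training loop are omitted, and the exact calls are an assumption about my script):

```python
import random
import numpy as np
import torch

def set_seed(seed):
    # Seed Python, NumPy, and PyTorch (CPU and all GPUs)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(1)      # ~98% accuracy after 1 epoch in my runs
# set_seed(1000)   # ~69% accuracy after 1 epoch in my runs
```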

Besides, when I use Adam as the optimizer and run 1 epoch, the result is usually bad (60-70% accuracy, though sometimes it also reaches 98%), but with Adadelta the accuracy is usually good (98%). Both optimizers use their default settings.
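
The optimizers are created with their default hyperparameters, roughly like this (the `model` here is just a small stand-in, not my actual network):

```python
import torch.nn as nn
import torch.optim as optim

# Small MNIST classifier as a placeholder for the real model (assumption)
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

optimizer = optim.Adadelta(model.parameters())  # defaults: lr=1.0, rho=0.9
# optimizer = optim.Adam(model.parameters())    # defaults: lr=1e-3, betas=(0.9, 0.999)
```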

I can understand that they influence the result, but the influence just feels so big. Can anyone tell me why that is? I would really appreciate it. Thanks!

Your model might be quite sensitive to the current initialization of its parameters.
Are you using the default init or did you initialize the parameters yourself?
In an optimal setup, the training should reach comparable accuracies for different random seeds, although the training time might differ a bit. You should not “optimize” the seed in any way.
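
If you want to take the initialization into your own hands, a common pattern looks like this (a sketch only; `model` is a stand-in for your network and the Kaiming scheme is just one possible choice):

```python
import torch.nn as nn

# Placeholder network; replace with your actual model
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

def init_weights(m):
    # Kaiming-uniform init for weight matrices of linear/conv layers, zeros for biases
    if isinstance(m, (nn.Linear, nn.Conv2d)):
        nn.init.kaiming_uniform_(m.weight, nonlinearity='relu')
        if m.bias is not None:
            nn.init.zeros_(m.bias)

model.apply(init_weights)
```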

It sounds like your model might be small and you reached 98% accuracy by chance in the first epoch.