Accuracy fluctuations between two different training runs


I am building a bidirectional LSTM network with attention. I have tuned the network, but it gives inconsistent results on each training run even though I have not changed any of the training parameters (learning rate, L2 regularisation, etc.). In one training session I get an accuracy of 78%, but when I retrain from scratch I get 17%. I understand that a variation of around 5% between runs is acceptable, but my model's gap is far larger. Has anyone else faced this issue?
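To check whether the variance comes from random initialization, I tried pinning the random seeds before each run. Below is a minimal sketch of the idea using NumPy as a stand-in for the framework's initializer (my actual model uses an LSTM framework, where you would also need to seed the framework itself, e.g. `tf.random.set_seed` in TensorFlow or `torch.manual_seed` in PyTorch):

```python
import random
import numpy as np

def seeded_init(seed, shape=(4, 4)):
    """Stand-in for building a model: pin all RNGs, then draw initial weights."""
    random.seed(seed)
    np.random.seed(seed)
    # Glorot-style uniform initialization, as many frameworks use by default.
    limit = np.sqrt(6.0 / sum(shape))
    return np.random.uniform(-limit, limit, size=shape)

# Two "training sessions" with the same seed start from identical weights.
w1 = seeded_init(42)
w2 = seeded_init(42)
print(np.array_equal(w1, w2))   # identical starting points

# Different seeds give different starting points, which (together with data
# shuffling and dropout) can produce large run-to-run accuracy differences.
w3 = seeded_init(7)
print(np.array_equal(w1, w3))   # different starting points
```

If fixing the seeds makes the runs agree, the swing between 78% and 17% is likely an unstable optimisation landing in very different optima depending on initialization, rather than a bug in the data pipeline.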