BatchNorm in PyTorch 1.0 on WIN10

I am just a beginner with PyTorch. Last week I installed PyTorch 1.0 on my new laptop running WIN10 and started my PyTorch journey. It looks quite different from TF, but it also gives me a good chance to learn PyTorch from scratch by rewriting my previous TF code. However, I got stuck on the BatchNorm function. I used MNIST as the dataset. The scenario: after I put BatchNorm right after the fully connected layer and before the ReLU, to my surprise the accuracy dropped to 40%, versus 97% without BatchNorm.

Over the last two days I checked everything over and over. I suspect that PyTorch 1.0 might not be carrying the training statistics over into testing, even though I call model.eval(). I would appreciate it if anyone could advise me on how to use the BatchNorm layer correctly.
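For reference, here is a minimal sketch of the setup described above (Linear -> BatchNorm -> ReLU on MNIST-shaped input), since the original post includes no code. The class name, layer sizes, and tensors are illustrative assumptions, not taken from the post; the point is the train/eval switch that governs BatchNorm's behavior:

```python
import torch
import torch.nn as nn

# Hypothetical MLP mirroring the described setup: fc -> BatchNorm1d -> ReLU.
# Layer sizes (784 -> 256 -> 10) are assumptions for MNIST, not from the post.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.bn1 = nn.BatchNorm1d(256)   # BatchNorm right after the fully connected layer
        self.fc2 = nn.Linear(256, 10)

    def forward(self, x):
        x = x.view(x.size(0), -1)                 # flatten (batch, 1, 28, 28) -> (batch, 784)
        x = torch.relu(self.bn1(self.fc1(x)))     # normalize before the ReLU
        return self.fc2(x)

model = Net()

# Training mode: BatchNorm uses per-batch statistics and updates its
# running mean/variance buffers.
model.train()
out = model(torch.randn(32, 1, 28, 28))

# Evaluation mode: model.eval() switches BatchNorm to the accumulated
# running statistics; forgetting this (or evaluating before the running
# stats have stabilized) is a common cause of a large train/test gap.
model.eval()
with torch.no_grad():
    pred = model(torch.randn(32, 1, 28, 28))
```

One thing worth checking in a situation like this is whether `model.eval()` is actually called on the same model object used at test time, and whether the running statistics have had enough training batches to become representative.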