About Save Files (.pt) and Reload Inference Accuracy

I have run into a problem with LibTorch 1.1.0.
Training followed by direct inference gives a normal, correct result; however, training, saving, and then loading the model for inference gives an abnormal, completely wrong result.
Any help?

Here is the flow of my code. The outputs in Step 6 are the same, but the outputs in Step 7 differ:
Step 1 Model initialization
Step 2 Model training (printing the output for an all-ones input at the last training step)
Step 3 torch::save(model, "D:\\path\\to\\model.pt");
Step 4 Initialization of model2
Step 5 torch::load(model2, "D:\\path\\to\\model.pt");
Step 6 std::cout << model->parameters()[i] << std::endl;
       std::cout << model2->parameters()[i] << std::endl;
Step 7 torch::Tensor t1 = model->forward(torch::ones({1, 1, 224, 224}).to(torch::kCUDA));
       torch::Tensor t2 = model2->forward(torch::ones({1, 1, 224, 224}).to(torch::kCUDA));
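For reference, here is a minimal, self-contained sketch of the same flow. The real network and training loop are replaced by a hypothetical single-Conv2d module (`Net`) and the save path is a placeholder, so treat it as an illustration of the steps rather than my actual code. Everything through the Step 6 parameter printout looks identical for model and model2, but the Step 7 outputs differ:

```cpp
#include <torch/torch.h>
#include <iostream>

// Hypothetical stand-in for the real network: a single Conv2d layer.
struct NetImpl : torch::nn::Module {
  NetImpl() : conv(torch::nn::Conv2dOptions(1, 8, /*kernel_size=*/3)) {
    register_module("conv", conv);
  }
  torch::Tensor forward(torch::Tensor x) { return conv->forward(x); }
  torch::nn::Conv2d conv;
};
TORCH_MODULE(Net);

int main() {
  // Step 1: initialize the model and move it to the GPU.
  Net model;
  model->to(torch::kCUDA);

  // Step 2: training loop omitted here.

  // Step 3: save the trained model (placeholder path).
  torch::save(model, "model.pt");

  // Steps 4 and 5: initialize a second model and load the saved weights.
  Net model2;
  torch::load(model2, "model.pt");
  model2->to(torch::kCUDA);

  // Step 6: print the parameters of both models; they look identical.
  for (size_t i = 0; i < model->parameters().size(); ++i) {
    std::cout << model->parameters()[i] << std::endl;
    std::cout << model2->parameters()[i] << std::endl;
  }

  // Step 7: forward the same all-ones input through both models.
  torch::Tensor input = torch::ones({1, 1, 224, 224}).to(torch::kCUDA);
  torch::Tensor t1 = model->forward(input);
  torch::Tensor t2 = model2->forward(input);
  std::cout << (t1 - t2).abs().max() << std::endl;  // non-zero in my case
  return 0;
}
```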

:disappointed:
Me too.
What can I do?