Inference directly after training gives correct results, but after saving the model to a .pt file and reloading it, the model's predictions are inaccurate.
Libtorch version 1.1.0
The same problem happens in my case. Training followed by direct inference gives a normal, correct result, but training, saving, and then loading for inference gives an abnormal and completely wrong result.
Any help?
Here is the flow of my code; the Step 6 outputs are the same, but the Step 7 outputs are completely different:
Step 1: Initialize model
Step 2: Train model (print the output for an all-ones input at the last training step)
Step 3: torch::save(model, "D:\\path\\to\\model.pt");
Step 4: Initialize model2
Step 5: torch::load(model2, "D:\\path\\to\\model.pt");
Step 6: std::cout << model->parameters()[169] << std::endl;
        std::cout << model2->parameters()[169] << std::endl;
Step 7: torch::Tensor t1 = model->forward(torch::ones({1, 1, 224, 224}).to(torch::kCUDA));
        torch::Tensor t2 = model2->forward(torch::ones({1, 1, 224, 224}).to(torch::kCUDA));