111408 (haodong) | March 30, 2024, 1:51pm | #1
I use torch.jit.script to save the model and load it with C++:

```cpp
#include <torch/script.h>
#include <iostream>
#include <string>

int main() {
    std::string model_path = "G:/modelscriptcuda.pt";
    try {
        torch::jit::script::Module module = torch::jit::load(model_path, torch::kCUDA);
    } catch (const c10::Error& e) {
        std::cerr << "error loading the model: " << e.what() << std::endl;
        return -1;
    }
    return 0;
}
```
And I got this error:
```
error loading the model: [enforce fail at ..\..\caffe2\serialize\inline_container.cc:197] . file not found: modelscriptcuda/version
(no backtrace available)
```
How did you save the checkpoint? Based on the error message I would guess your checkpoint might be in an older format, as the version file is missing in the archive.
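As a side note, a TorchScript `.pt` file is an ordinary zip archive, so you can check from Python (stdlib only, no torch needed) whether the `version` record that libtorch is complaining about is actually present. The archives below are synthetic stand-ins that only mimic the layout; no real model is involved:

```python
import io
import zipfile

def has_version_record(archive_bytes: bytes) -> bool:
    # A current-format TorchScript export contains a "<name>/version"
    # entry inside the zip; libtorch's inline_container checks for it.
    with zipfile.ZipFile(io.BytesIO(archive_bytes)) as zf:
        return any(name.endswith("/version") for name in zf.namelist())

# Stand-in archive that mimics a current-format export.
buf_ok = io.BytesIO()
with zipfile.ZipFile(buf_ok, "w") as zf:
    zf.writestr("modelscriptcuda/version", "6\n")
    zf.writestr("modelscriptcuda/data.pkl", "...")

# Stand-in archive missing the version record (older/foreign format).
buf_old = io.BytesIO()
with zipfile.ZipFile(buf_old, "w") as zf:
    zf.writestr("modelscriptcuda/data.pkl", "...")

print(has_version_record(buf_ok.getvalue()))   # True
print(has_version_record(buf_old.getvalue()))  # False
```

Running the same check against the real `G:/modelscriptcuda.pt` would tell you whether the file your C++ program sees is the one you just exported.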
111408 (haodong) | March 31, 2024, 2:11am | #3
I use torch.save to save my model's state dict:

```python
torch.save(model.state_dict(), save_path)
```

Then I load it back and script it:

```python
model.load_state_dict(torch.load('G:/chapter5nomissing.pt'))
model.eval()
model = model.float()

example_input = torch.rand(1, 6, 850)
example_rawdata = torch.rand(1, 6, 1000)
example_index = torch.rand(1, 2, 30)
example_mask = torch.rand(1, 1, 6, 1000)
print(example_input.type(), example_rawdata.type(), example_index.type(), example_mask.type())

# torch.jit.script does not take example inputs (those are for torch.jit.trace)
traced_script_module = torch.jit.script(model)
traced_script_module.save('G:/modelscriptcuda.pt')
print("save")

load_scripted_model = torch.jit.load('G:/modelscriptcuda.pt')
print(load_scripted_model)
```
And I found it loads fine in Python.
Which PyTorch and libtorch versions are you using? It still seems they mismatch, given the unexpected checkpoint format.
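When debugging this kind of mismatch, comparing only the major.minor part of the two version strings is usually what matters, since serialization formats change between minor releases. A minimal sketch (`versions_compatible` is a hypothetical helper, not a PyTorch API; the `+cu113` local-build suffix is stripped before comparing):

```python
def versions_compatible(a: str, b: str) -> bool:
    # Compare only major.minor; "+cu113"-style local suffixes are ignored.
    def major_minor(v: str):
        core = v.split("+")[0]
        parts = core.split(".")
        return int(parts[0]), int(parts[1])
    return major_minor(a) == major_minor(b)

print(versions_compatible("1.11.0+cu113", "1.4.0"))   # False
print(versions_compatible("1.11.0+cu113", "1.11.0"))  # True
```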
111408 (haodong) | March 31, 2024, 2:44pm | #5
torch == 1.11.0+cu113

```
-- Found torch: I:/Project/CNN-1DMEAE/libtorch/lib/torch.lib
-- Found Torch: 1.4.0
```
Update both to the same version and rerun your code.
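A rough sketch of aligning the two sides, assuming you stay on 1.11.0 with CUDA 11.3 (check pytorch.org for the exact wheel index and the matching libtorch zip for your platform; the libtorch path below is a placeholder):

```shell
# Python side: install the wheel matching the libtorch build you plan to use
pip install torch==1.11.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113

# C++ side: replace the old 1.4.0 libtorch with the matching 1.11.0 build
# (pick the Windows / CUDA 11.3 libtorch zip from the "Get Started" page
# at pytorch.org, unzip it, then point CMake at it)
cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch-1.11.0 ..
```

After that, re-export the model with the updated torch and reload it in C++, so both sides agree on the archive format.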