@Carol_Ye_Liu
Can you check which PyTorch version you installed? It seems to me that when you tested PyTorch 1.6 + LibTorch 1.6, the PyTorch you were actually using might not have been 1.6.
You can print out the PyTorch version with:
import torch
print(torch.__version__)
I tried on my machine with the latest PyTorch + LibTorch, and it works for me.
If you are trying to store a tensor instead of a module, you can use torch::pickle_save in C++ and torch.load() in Python.
Here is the sample code in C++; the Python side is straightforward.
#include <torch/torch.h>
#include <fstream>

torch::Tensor ten = torch::rand({12, 12});
auto bytes = torch::pickle_save(ten); // returns a std::vector<char> holding the serialized tensor
std::ofstream fout("ten.zip", std::ios::out | std::ios::binary);
fout.write(bytes.data(), bytes.size());
fout.close();
And in Python you can do:
import torch
mytensor = torch.load("ten.zip")
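If you want to sanity-check the round trip, you can compare against the shape from the C++ snippet above:
print(mytensor.shape)  # should print torch.Size([12, 12]), matching the rand({12, 12}) on the C++ side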
If you are trying to save a module using JIT, then you should do what you did before in C++ and load it with torch.jit.load().
But this time, what you are loading is a module object instead of a Tensor.
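For instance, a minimal sketch of the Python side, assuming you saved a ScriptModule from C++ to a file named module.pt (the file name is hypothetical):
import torch
m = torch.jit.load("module.pt")     # a RecursiveScriptModule, not a Tensor
print(type(m))
print(list(m.named_parameters()))   # any tensors you stored live inside the module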
Taking the fake_img tensor you saved from C++ via the JIT route, here is what I get when loading it:
In [1]: import torch
In [2]: a = torch.jit.load("sample.img")
In [3]: a
Out[3]: RecursiveScriptModule(original_name=Module)
In [4]: a.__dict__
Out[4]:
{'_initializing': False,
'_c': <torch._C.ScriptModule at 0x7f07c609bb30>,
'_parameters': <torch.jit._script.OrderedDictWrapper at 0x7f08232c0a90>,
'_buffers': <torch.jit._script.OrderedDictWrapper at 0x7f07c655f940>,
'_non_persistent_buffers_set': set(),
'_backward_hooks': OrderedDict(),
'_forward_hooks': OrderedDict(),
'_forward_pre_hooks': OrderedDict(),
'_state_dict_hooks': OrderedDict(),
'_load_state_dict_pre_hooks': OrderedDict(),
'_modules': <torch.jit._script.OrderedModuleDict at 0x7f08231ae490>,
'_concrete_type': <torch._C.ConcreteModuleType at 0x7f0820bcf7b0>}
In [5]: a._parameters.keys()
Out[5]: ['0']
Your tensor will be in
a._parameters['0']
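So, continuing the session above, you can pull the plain tensor back out like this (fake_img is just a name I'm using for the recovered tensor):
fake_img = a._parameters['0']       # the Tensor that was saved from C++
print(type(fake_img), fake_img.shape)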