Unknown type name '__torch__.Module'


The following code works with libtorch 1.2.0 but not with 1.3.0:

    // Saving
    torch::serialize::OutputArchive archive;
    torch::serialize::OutputArchive slot;
    archive.write(KEY_MODULE, slot);

    // Loading (in a separate scope)
    torch::serialize::InputArchive archive;
    archive.load_from(filename, tensors::device()); // tensors::device() returns the correct device
Unknown type name '__torch__.Module':
at code/__torch__.py:47:60
import __torch__.___torch_mangle_5
import __torch__.___torch_mangle_6
import __torch__.___torch_mangle_7
import __torch__.___torch_mangle_8
import __torch__.___torch_mangle_9
import __torch__.___torch_mangle_10
class Module(Module):
  __parameters__ = ["a6d20b65-2d16-4fbd-bce3-6886baed0c39", "2779f61f-c777-452e-a375-64c2c247be72", ]
  __annotations__ = []
  __annotations__["bc95c774-407e-4b95-91b7-9692585d89d0"] = __torch__.Module
                                                            ~~~~~~~~~~~~~~~~ <--- HERE
  __annotations__["a6d20b65-2d16-4fbd-bce3-6886baed0c39"] = Tensor
  __annotations__["2779f61f-c777-452e-a375-64c2c247be72"] = Tensor
Compiled from code 

Is there something obvious that I am missing?

Thank you,


What is __torch__.py supposed to be? This is some of your code, right? It looks like the Module class there got removed.

Hello Alban,

Thank you for your help.
My code is plain C++; I am not using Python at all, so I have no idea what __torch__.py is supposed to be.
I guess this is related to the JIT.


Where does the sample of Python code that you posted above come from?

This is the message of the exception thrown by load_from:

archive.load_from(filename); // <= throws

Can you try creating a compilation unit that all the OutputArchives share? Something like:

    auto cu = std::make_shared<torch::jit::script::CompilationUnit>();
    torch::serialize::OutputArchive archive(cu);
    torch::serialize::OutputArchive slot(cu);
    archive.write(KEY_MODULE, slot);
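For completeness, here is a minimal sketch of the full save/load round trip with the shared CompilationUnit. The names KEY_MODULE and filename are placeholders from the snippets above, and the exact archive contents are an assumption; this is a sketch of the pattern, not the original program:

```cpp
#include <memory>
#include <string>
#include <torch/torch.h>

// Hypothetical key, standing in for the KEY_MODULE constant used above.
static const std::string KEY_MODULE = "module";

void save_and_load(const std::string& filename) {
    // One CompilationUnit shared by every OutputArchive, so that class
    // types such as __torch__.Module are registered once and resolve
    // consistently when the archive is read back.
    auto cu = std::make_shared<torch::jit::script::CompilationUnit>();

    {
        // Saving: the nested slot archive shares the compilation unit.
        torch::serialize::OutputArchive archive(cu);
        torch::serialize::OutputArchive slot(cu);
        archive.write(KEY_MODULE, slot);
        archive.save_to(filename);
    }

    {
        // Loading: with the shared unit at save time, this no longer
        // throws "Unknown type name '__torch__.Module'".
        torch::serialize::InputArchive archive;
        archive.load_from(filename);
    }
}
```

The key design point is that each default-constructed OutputArchive creates its own CompilationUnit, so a nested archive can end up referencing a class type that the outer archive's unit knows nothing about; passing the same unit to both constructors avoids the mismatch.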

It works fine now.
Thank you, Michael.