Exporting and loading a Torch module via iostream fails in C++

Dear all,

I have a problem that I cannot solve. I am developing a C++ program that uses a PyTorch model. I have no clue where to start debugging this, so I hope somebody can help me or give me some input so I can continue my search for a solution.

Is this a bug in LibTorch, or am I doing something wrong?

Description
I want to export my Torch module by writing it to a std::ostream, and then read the exported module back using a std::istream. This does not work. Details are below.

For clarity, I try to use the following two functions:
(from …\include\torch\csrc\jit\serialization\export.h)

TORCH_API void ExportModule(
    const Module& module,
    std::ostream& out,
    const ExtraFilesMap& metadata = ExtraFilesMap(),
    bool bytecode_format = false,
    bool save_mobile_debug_info = false);

(from …\include\torch\csrc\jit\serialization\import.h)

/// Loads a serialized `Module` from the given `istream`.
///
/// The istream must contain a serialized `Module`, exported via
/// `torch::jit::ExportModule` in C++.
TORCH_API Module
load(std::istream& in, c10::optional<c10::Device> device = c10::nullopt);

To reproduce
To test this, I wrote a small test program.

  • In Python, I developed and trained a model and exported it using torch.jit.trace(...).save(...).
  • Then I load this TorchScript model successfully in C++ using torch::jit::load(file_path) (i.e., not using the std::istream overload).
  • Then, for testing purposes, I export it again using torch::jit::ExportModule(module, ostream_out) as declared above.
  • Then, for testing purposes, I import it again using module = torch::jit::load(istream_in) as declared above.

The code is below:

// Path to file
std::string torchScriptFileFromPython = "C:/Test/MyModule.pte";


// Deserialize the ScriptModule from file as saved in Python (this works flawlessly).
std::cout << "Loading Torch Script model...\n\n";
torch::jit::script::Module module_python;
try {
	module_python = torch::jit::load(torchScriptFileFromPython);
}
catch (const c10::Error& e) {
	std::cerr << "Error during loading the model:\n\n";
	std::cerr << e.msg() << std::endl << std::endl;
	std::cerr << e.backtrace() << std::endl << std::endl;
	return -1;
}
std::cout << "Model successfully loaded!\n\n";


// Stream the module out (this code runs but I think it creates a corrupted file).
std::string streamedFile = "C:/Test/MyModuleStreamed.pte";
{
	std::filebuf outfb;
	outfb.open(streamedFile, std::ios::out);
	std::ostream os(&outfb);
	try {
		torch::jit::ExportModule(module_python, os);
		// torch::jit::ExportModule(module_python, streamedFile);  // If I used this line instead of the previous one, it would work. However, I want to use the ostream.
	}
	catch (const c10::Error& e) {
		std::cerr << "Error during export streaming the model:\n\n";
		std::cerr << e.msg() << std::endl << std::endl;
		std::cerr << e.backtrace() << std::endl << std::endl;
		return -1;
	}
	outfb.close();
	std::cout << "Model successfully export streamed!\n\n";
}


// Stream the module in (this part raises an exception).
torch::jit::script::Module module;
{
	std::filebuf infb;
	infb.open(streamedFile, std::ios::in);
	std::istream is(&infb);
	try {
		module = torch::jit::load(is);
		//module = torch::jit::load(streamedFile);  // If I used this line instead of the previous one (and did the same in the ExportModule part), it would work. However, I want to use the istream.
	}
	catch (const c10::Error& e) {
		std::cerr << "Error during import streaming the model:\n\n";
		std::cerr << e.msg() << std::endl << std::endl;
		std::cerr << e.backtrace() << std::endl << std::endl;
		return -1;
	}
	infb.close();
	std::cout << "Model successfully import streamed!\n\n";
}

As noted in the code comments, if I use the overloads that accept a std::string containing the path instead of the istream/ostream versions, it works. However, I want to use streams so that I can manipulate the data before writing and undo that manipulation when reading.
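As an aside, here is a minimal sketch (untested, my own assumption) of round-tripping the module through an in-memory std::stringstream, which avoids the file system entirely; a stringstream performs no newline translation, so if this works while the file-based version fails, the file I/O is the likely culprit (requires #include <sstream>):

std::stringstream buffer;                          // in-memory, no file involved
torch::jit::ExportModule(module_python, buffer);   // serialize into the buffer
buffer.seekg(0);                                   // rewind the read position before loading
torch::jit::script::Module roundtripped = torch::jit::load(buffer);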

The relevant headers I use:

#include <torch/script.h> // One-stop header.
#include <torch/csrc/jit/serialization/export.h>
#include <iostream>
#include <string>
#include <ostream>
#include <istream>

The Exception
The exception I get when using the istream/ostream versions is:

istream reader failed: reading file.
Exception raised from validate at ..\..\caffe2\serialize\istream_adapter.cc:32 (most recent call first):
00007FFDA78C4A2A00007FFDA78C3AF0 c10.dll!c10::detail::LogAPIUsageFakeReturn [<unknown file> @ <unknown line number>]
00007FFDA78C458A00007FFDA78C3AF0 c10.dll!c10::detail::LogAPIUsageFakeReturn [<unknown file> @ <unknown line number>]
00007FFDA78C578100007FFDA78C3AF0 c10.dll!c10::detail::LogAPIUsageFakeReturn [<unknown file> @ <unknown line number>]
00007FFDA78C53F500007FFDA78C3AF0 c10.dll!c10::detail::LogAPIUsageFakeReturn [<unknown file> @ <unknown line number>]
00007FFDA78C2FAF00007FFDA78C2F40 c10.dll!c10::Error::Error [<unknown file> @ <unknown line number>]
00007FFDA78C1B3600007FFDA78C1A70 c10.dll!c10::detail::torchCheckFail [<unknown file> @ <unknown line number>]
00007FFD350ECD0800007FFD350ECBB0 torch_cpu.dll!caffe2::serialize::IStreamAdapter::validate [<unknown file> @ <unknown line number>]
00007FFD350ECB4200007FFD350ECA80 torch_cpu.dll!caffe2::serialize::IStreamAdapter::read [<unknown file> @ <unknown line number>]
00007FFD350E7BDE00007FFD350E7B70 torch_cpu.dll!caffe2::serialize::PyTorchStreamReader::read [<unknown file> @ <unknown line number>]
00007FFD350E7EE000007FFD350E7D00 torch_cpu.dll!caffe2::serialize::PyTorchStreamReader::getRecordID [<unknown file> @ <unknown line number>]
00007FFD350E24CD00007FFD350C2DD0 torch_cpu.dll!caffe2::Workspace::bookkeeper [<unknown file> @ <unknown line number>]
00007FFD350E280300007FFD350C2DD0 torch_cpu.dll!caffe2::Workspace::bookkeeper [<unknown file> @ <unknown line number>]
00007FFD350D304A00007FFD350C2DD0 torch_cpu.dll!caffe2::Workspace::bookkeeper [<unknown file> @ <unknown line number>]
00007FFD350E74C100007FFD350E70D0 torch_cpu.dll!caffe2::serialize::PyTorchStreamReader::init [<unknown file> @ <unknown line number>]
00007FFD350E68E900007FFD350E6850 torch_cpu.dll!caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader [<unknown file> @ <unknown line number>]
00007FFD37A044FC00007FFD379FFCB0 torch_cpu.dll!torch::jit::readArchiveAndTensors [<unknown file> @ <unknown line number>]
00007FFD37A021B600007FFD379FFCB0 torch_cpu.dll!torch::jit::readArchiveAndTensors [<unknown file> @ <unknown line number>]
00007FFD37A0BC2200007FFD379FFCB0 torch_cpu.dll!torch::jit::readArchiveAndTensors [<unknown file> @ <unknown line number>]
00007FFD379FFB4F00007FFD379FF9B0 torch_cpu.dll!torch::jit::load [<unknown file> @ <unknown line number>]
00007FFD379FF61500007FFD379FF530 torch_cpu.dll!torch::jit::load [<unknown file> @ <unknown line number>]
00007FFD379FF4E100007FFD379FF460 torch_cpu.dll!torch::jit::load [<unknown file> @ <unknown line number>]
00007FF775C039C700007FF775C03600 LibTorchTest.exe!main [D:\Default_Folders\Documents\Development\RepoStefan\PyTorchNetworkv1\LibTorchTest\src\main.cpp @ 86]
00007FF775C0606900007FF775C06030 LibTorchTest.exe!invoke_main [D:\agent\_work\13\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl @ 79]
00007FF775C05F0E00007FF775C05DE0 LibTorchTest.exe!__scrt_common_main_seh [D:\agent\_work\13\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl @ 288]
00007FF775C05DCE00007FF775C05DC0 LibTorchTest.exe!__scrt_common_main [D:\agent\_work\13\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl @ 331]
00007FF775C060FE00007FF775C060F0 LibTorchTest.exe!mainCRTStartup [D:\agent\_work\13\s\src\vctools\crt\vcstartup\src\startup\exe_main.cpp @ 17]
00007FFE0296703400007FFE02967020 KERNEL32.DLL!BaseThreadInitThunk [<unknown file> @ <unknown line number>]
00007FFE0300265100007FFE03002630 ntdll.dll!RtlUserThreadStart [<unknown file> @ <unknown line number>]

However, I think the export already writes a corrupted file: when using the istream approach, I get the same error even if I directly load the file that was exported by Python.
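One way to check this (a diagnostic sketch of mine; the helper fileSize is made up for illustration) is to compare the sizes of the two files; if the streamed copy is corrupted by extra bytes, its size will differ from the original's:

#include <fstream>
#include <string>

// Hypothetical helper: size of a file in bytes (open in binary, seek to end).
std::streamsize fileSize(const std::string& path) {
	std::ifstream f(path, std::ios::binary | std::ios::ate);
	return f.tellg();
}

// Usage: the two sizes should match if the copy is byte-identical.
// std::cout << fileSize(torchScriptFileFromPython) << " vs " << fileSize(streamedFile) << "\n";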

Expected behavior
The model is exported to file and loaded back without errors, after which I can successfully call module.forward(inputs) on the loaded module.
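For example, a hypothetical smoke test after loading (the input shape {1, 3, 224, 224} and the single-tensor output are assumptions; substitute the model's real signature):

std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({1, 3, 224, 224}));        // dummy input, shape assumed
at::Tensor output = module.forward(inputs).toTensor();  // assumes the model returns one tensor
std::cout << output.sizes() << "\n";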

Environment
I’m on Windows 10. The problem occurs both when I build in debug mode and in release mode.
I use the LTS version of LibTorch, version 1.8.1, CPU-only. I also use PyTorch version 1.8.1 in Python.
For now, the model is exported with everything on the CPU.

Additional context
Why do I want to use streams to export/load the module?
I need to deploy our solution to a customer. The customer should not be able to simply take the model file, load it, and inspect our network, including all trained weights. Therefore, when writing/reading the streams, I want to apply some form of encryption so that at least the file on disk is encrypted.
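To illustrate the idea, a minimal sketch (the names saveObfuscated, loadObfuscated, and kKey are placeholders of mine, and the XOR transform is mere obfuscation standing in for a proper cipher):

#include <fstream>
#include <iterator>
#include <sstream>
#include <string>

static const std::string kKey = "my-secret-key";  // placeholder key

// XOR every byte with the key; applying it twice restores the original data.
static void xorBytes(std::string& data) {
	for (std::size_t i = 0; i < data.size(); ++i)
		data[i] ^= kKey[i % kKey.size()];
}

void saveObfuscated(const torch::jit::script::Module& m, const std::string& path) {
	std::stringstream buffer;
	torch::jit::ExportModule(m, buffer);        // serialize to memory first
	std::string bytes = buffer.str();
	xorBytes(bytes);                            // transform before writing
	std::ofstream out(path, std::ios::binary);  // binary mode matters here too
	out.write(bytes.data(), static_cast<std::streamsize>(bytes.size()));
}

torch::jit::script::Module loadObfuscated(const std::string& path) {
	std::ifstream in(path, std::ios::binary);
	std::string bytes((std::istreambuf_iterator<char>(in)),
	                  std::istreambuf_iterator<char>());
	xorBytes(bytes);                            // undo the transform
	std::stringstream buffer(bytes);
	return torch::jit::load(buffer);
}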

Any other approach to achieve this would be very welcome. However, it would not solve this problem, which I consider a bug.

Maybe you want to take a look at this post:

I had similar issues some time ago, and using binary streams fixed them.

Thanks!
Last night, I figured this out too. It might be good to add a note about this to the header, although it is somewhat logical in hindsight.

I’m rather new to C++ so I have to figure out some of these things.

For future readers, replace the two lines:

outfb.open(streamedFile, std::ios::out);
outfb.open(streamedFile, std::ios::in);

by:

outfb.open(streamedFile, std::ios::binary | std::ios::out);
outfb.open(streamedFile, std::ios::binary | std::ios::in);

and it works.

A file is opened in text mode by default. On Windows, every byte written that corresponds to the '\n' character is then replaced by the two bytes '\r\n' (and the reverse translation happens when reading), which corrupts binary data such as a serialized module.
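A tiny self-contained demonstration of the translation (an illustrative sketch, not from my program; the byte values are what I observe on Windows):

#include <fstream>
#include <iostream>

int main() {
	{
		std::ofstream out("demo.bin");  // text mode by default
		out.put('\n');                  // on Windows this writes the two bytes 0x0D 0x0A
	}
	std::ifstream in("demo.bin", std::ios::binary);  // read back the raw bytes
	char c;
	while (in.get(c))
		std::cout << std::hex << static_cast<int>(static_cast<unsigned char>(c)) << ' ';
	std::cout << '\n';  // prints "d a" on Windows, just "a" elsewhere
}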

How do I close this topic? Or can somebody close it for me?

Hi, do you mean infb.open(streamedFile, std::ios::binary | std::ios::in); rather than outfb.open(streamedFile, std::ios::binary | std::ios::in);?