INTERNAL ASSERT FAILED at "../aten/src/ATen/core/jit_type_base.h":172

Here is the code snippet I use to run inference on a TorchScript model.

	// Prepare inputs
	std::vector<torch::jit::IValue> input_tensor_list;
	for (size_t n = 0; n < input_list.size(); ++n) {
		auto blob = mCPU ? input_list[n]->mutable_cpu_data() : input_list[n]->mutable_gpu_data();
		// at::IntArrayRef expects int64_t extents
		std::vector<int64_t> long_shape(std::begin(input_list[n]->shape), std::end(input_list[n]->shape));
		at::IntArrayRef shape(long_shape.data(), long_shape.size());
		auto options = torch::TensorOptions().dtype(torch::kFloat32);
		// Note: from_blob does not copy; blob must outlive the tensor
		input_tensor_list.emplace_back(torch::from_blob(blob, shape, options));
		// Debug
		std::cout << input_tensor_list[n].toTensor() << "\n";
	}

	// Make inference
	auto output = module->forward(input_tensor_list); //.toTensorVector();

I can print out the input tensor, but the forward call fails with the assertion above. I am new to the Torch C++ API.

Is there any way to check whether the module is properly loaded, or to print it on screen like in Python?
Any idea to pin down the error is very welcome.

-0.1138 -0.1138 -0.0441 -0.0964 -0.1138 -0.0267 0.0431 0.1476
0.2871 0.1825 0.1128 0.1302 0.2522 0.3393 0.3219 0.2696
0.3742 0.2871 0.1651 0.1651 0.2348 0.2173 0.3045 0.2522
0.3916 0.3045 0.2696 0.2348 0.2696 0.3045 0.2522 0.2348
0.3568 0.2348 0.2173 0.1999 0.1302 0.1651 0.1128 0.1999
0.5485 0.5311 0.4788 0.4091 0.2522 0.1128 0.0953 0.0605
[ CPUFloatType{1,3,224,224} ]
terminate called after throwing an instance of 'c10::Error'
what(): r INTERNAL ASSERT FAILED at "…/aten/src/ATen/core/jit_type_base.h":172, please report a bug to PyTorch.
Exception raised from expect at …/aten/src/ATen/core/jit_type_base.h:172 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x69 (0x7fcba2b5ab29 in /opt/libtorch/lib/libc10.so)
frame #1: std::shared_ptr<c10::ClassType> c10::Type::expect<c10::ClassType>() + 0xbd (0x7fcc3d8c3ead in /opt/libtorch/lib/libtorch_cpu.so)
frame #2: c10::ivalue::Object::type() const + 0x21 (0x7fcc3d8b6341 in /opt/libtorch/lib/libtorch_cpu.so)
frame #3: torch::jit::Object::find_method(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const + 0x4e (0x7fcc3ff1ceae in /opt/libtorch/lib/libtorch_cpu.so)
frame #4: torch::jit::Object::get_method(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const + 0x50 (0x7fccb45bc9c6 in /home/bozkalayci/workspace/ws_Puhu/xDNN/Debug/libxDNN_d.so)
frame #5: torch::jit::Module::forward(std::vector<c10::IValue, std::allocator<c10::IValue> >) + 0x77 (0x7fccb45bcdf1 in /home/bozkalayci/workspace/ws_Puhu/xDNN/Debug/libxDNN_d.so)

Could you create an issue on GitHub, as this seems to be an internal error, please?
An executable code snippet to reproduce this issue would be great 🙂

I found a workaround, but I guess the problem is in how the module is loaded.

I got the above error when I loaded the module into a pointer like this, in the init function of a class instance:

auto *module = new torch::jit::script::Module;
*module = torch::jit::load(fileName);

The module pointer is cast to a void pointer and kept as a member variable of the class. I got the error in one of the class's method calls, where I cast the void pointer back to `torch::jit::script::Module*`; I guess some internal state gets corrupted along the way.

The workaround is to use the conventional way:

module = torch::jit::load(fileName);

What is your suggestion for keeping a valid module pointer as a class member?