TorchScript model's output is wrong when loaded from C++

Hi Team,
I have a very simple fizbuz model built with the PyTorch Python APIs, which I have exported as a ScriptModule. I load the same module from both Python and C++ and pass the same input, but I get the wrong output in C++. In fact, regardless of what input I pass, I get exactly the same values in the output from C++.
Here is my Python and C++ code for the same

PS: I am a C++ noob

# ~/myp/HOD/8.P/FizBuzTorchScript> python fizbuz.py fizbuz_model.pt 2

import sys
import torch


def main():
    net = torch.jit.load(sys.argv[1])
    temp = [int(i) for i in '{0:b}'.format(int(sys.argv[2]))]
    array = [0] * (10 - len(temp)) + temp
    inputs = torch.Tensor([array])
    print(inputs)  # tensor([[0., 0., 0., 0., 0., 0., 0., 0., 1., 0.]])
    output = net(inputs)
    print(output)  # tensor([[ -1.8873, -17.1001,  -3.7774,   3.7985]], ...


if __name__ == '__main__':
    main()

// ~/myp/HOD/8.P/FizBuzTorchScript/build> ./fizbuz ../fizbuz_model.pt 2

#include <torch/script.h>

#include <iostream>
#include <memory>
#include <string>

int main(int argc, const char* argv[]) {
	if (argc != 3) {
		std::cerr << "usage: <appname> <path> <int>\n";
		return -1;
	}
	std::string arg = argv[2];
	int x = std::stoi(arg);
	int array[10];
	// Fill the binary representation of x, most significant bit first.
	for (int i = 0; i < 10; ++i) {
		array[9 - i] = (x >> i) & 1;
	}
	std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[1]);
	torch::Tensor tensor_in = torch::from_blob(array, {1, 10});
	std::vector<torch::jit::IValue> inputs;
	inputs.push_back(tensor_in);
	std::cout << inputs << '\n';
	/*
		1e-45 *
			 0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  0.0000  1.4013  0.0000
			[ Variable[CPUFloatType]{1,10} ]
	*/


	at::Tensor output = module->forward(inputs).toTensor();
	std::cout << output << '\n';

	/*
		 3.7295 -23.8977 -8.2652 -1.3901
			[ Variable[CPUFloatType]{1,4} ]
	*/
}

Here is the model, if it helps

import torch
from torch import nn

input_size = 10
output_size = 4
hidden_size = 100


class FizBuzNet(nn.Module):
    """
    2 layer network for predicting fiz or buz
    param: input_size -> int
    param: output_size -> int
    """

    def __init__(self, input_size, hidden_size, output_size):
        super(FizBuzNet, self).__init__()
        self.hidden = nn.Linear(input_size, hidden_size)
        self.out = nn.Linear(hidden_size, output_size)

    def forward(self, batch):
        hidden = self.hidden(batch)
        activated = torch.sigmoid(hidden)
        out = self.out(activated)
        return out

From the docs, from_blob takes a void* and the (optional) TensorOptions specify the type, probably defaulting to float. So maybe declaring array to be a float array works better.

Best regards

Thomas

That did not help :frowning:
Does it look like a bug in the JIT module (I know, highly unlikely) or am I doing something wrong?
Also, for whatever input I pass, I get exactly this same output:

 3.7295 -23.8977 -8.2652 -1.3901
[ Variable[CPUFloatType]{1,4} ]

As Thomas said, you probably have to make array a float; you have it as int array[10].
If you have int array[10] and reinterpret its memory as float, you're probably going to get weird floats out the other side.


Thanks a ton @smth @tom. That worked. In fact, I got the answer two minutes ago from @lantiga and was about to post here.

ahah, yes, I was posting here when I saw @smth's reply come in live :slight_smile:
I confirm that using float array[10] fixes it.
