PyTorch: Convert Model into C++

Hi folks,
I have a model contains ModuleDict which requires key to be string type. But channel_id is int type. Thus, inside forward() function, I use str(channel_id) to convert from int type to string type and then use it as ModuleDict’s key. This is fine.

The problem arises when converting the model to C++ via tracing: every traced argument must be a Tensor, and a Tensor does not support the string data type. So I pass channel_id as a single-element int64 Tensor. However, the model's forward() expects channel_id to be a plain int64 value, not a Tensor.

Of course, we could modify the model's forward() to accept channel_id as a Tensor instead of a plain int64 just to make the model traceable, but that would require a lot of changes.
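As an aside (a minimal sketch with a toy function, not the actual model): even accepting channel_id as a tensor would not make trace handle it dynamically, because pulling the value out with .item() to build a dict key or branch on it bakes the path taken at trace time into the graph:

```python
import torch
import warnings

def f(x, channel_id):
    # channel_id is a one-element int64 tensor; .item() pulls the value
    # out into Python, so the tracer records only the branch taken here
    if channel_id.item() == 0:
        return x + 1.0
    return x - 1.0

with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # suppress the TracerWarning about .item()
    traced = torch.jit.trace(f, (torch.zeros(2), torch.tensor([0])))

# the branch was baked in at trace time: channel_id=1 still takes
# the channel-0 path and adds 1.0
print(traced(torch.zeros(2), torch.tensor([1])))
```

This is exactly why the tracer emits a TracerWarning when a tensor is converted to a Python number inside forward().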

Is there a better way to keep the existing model code and still be able to trace it?

channel_id = torch.ones(1, dtype=torch.int64)
traced_script_module = torch.jit.trace(model, (premise, premise_length, hypotheses, hypotheses_length, channel_id))
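One alternative worth noting (a hedged sketch with a simplified stand-in model, not the poster's actual code): torch.jit.script, unlike trace, accepts non-tensor arguments such as int, so the forward() signature can stay as it is. Indexing a ModuleDict with a runtime string is not supported in TorchScript, but iterating the dict with .items() and comparing keys is:

```python
import torch
import torch.nn as nn

class ChannelModel(nn.Module):
    """Simplified stand-in for a model that dispatches on channel_id."""
    def __init__(self):
        super().__init__()
        self.heads = nn.ModuleDict({
            "0": nn.Linear(4, 4),
            "1": nn.Linear(4, 4),
        })

    def forward(self, x: torch.Tensor, channel_id: int) -> torch.Tensor:
        # TorchScript cannot index a ModuleDict with a runtime string,
        # but it can iterate over it and compare keys
        key = str(channel_id)
        out = x  # falls back to the input if no head matches
        for name, head in self.heads.items():
            if name == key:
                out = head(x)
        return out

model = ChannelModel()
scripted = torch.jit.script(model)  # an int argument is fine for script
out = scripted(torch.randn(2, 4), 1)
print(out.shape)
```

The scripted module can then be saved with scripted.save(...) and loaded from C++ with torch::jit::load, passing the int as an IValue.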

Hi @marchss,

please don’t tag specific people, as this discourages others from answering, and the tagged person might not even be an expert on the topic. :wink:

Sorry about that. I often see great answers from you folks, which is why I did it. I’ll avoid it in the future.

@marchss As this problem is more related to what JIT accepts as input to a traced module, could you post it in https://github.com/pytorch/pytorch/issues so that we can discuss potential solutions with more people?

The problem is that a traced model treats non-tensor inputs as constants, and this is not really changeable.
So you would want to mix scripting and tracing, as discussed in the other thread.
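The mix Thomas mentions can be sketched like this (hypothetical names, a minimal sketch rather than the actual model): the tensor-only submodules are traced as usual, while the dispatch on channel_id lives in a scripted wrapper:

```python
import torch
import torch.nn as nn

class Dispatcher(nn.Module):
    """Scripted wrapper: the control flow on channel_id stays in script,
    while the tensor-only submodules it holds were produced by trace."""
    def __init__(self, heads: nn.ModuleDict):
        super().__init__()
        self.heads = heads

    def forward(self, x: torch.Tensor, channel_id: int) -> torch.Tensor:
        key = str(channel_id)
        out = x  # falls back to the input if no head matches
        for name, head in self.heads.items():
            if name == key:
                out = head(x)
        return out

example = torch.randn(2, 4)

# trace each per-channel head individually; trace is fine here because
# these submodules take only tensor inputs
heads = nn.ModuleDict({
    "0": torch.jit.trace(nn.Linear(4, 4), example),
    "1": torch.jit.trace(nn.Linear(4, 4), example),
})

# then script the wrapper that does the int-to-str dispatch
scripted = torch.jit.script(Dispatcher(heads))
print(scripted(example, 0).shape)
```

Scripted and traced modules compose, so torch.jit.script reuses the already-compiled heads instead of recompiling them.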

Best regards

Thomas
