Using JIT to deploy an Encoder-Decoder model

I currently have a trained RNN encoder-decoder model, and I'm trying to follow the tutorial for deploying it with JIT and C++ here: https://pytorch.org/tutorials/advanced/cpp_export.html . For the example inputs, I'm supposed to pass in what one would normally pass for the model's forward pass; however, I am getting the following error:

SyntaxError: invalid syntax
>>> traced_script_module = torch.jit.trace(model, (example1, example2))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.7/site-packages/torch/jit/__init__.py", line 634, in trace
    module = TopLevelTracedModule(func, **executor_options)
  File "/usr/local/lib/python3.7/site-packages/torch/jit/__init__.py", line 963, in init_then_register
    original_init(self, *args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/torch/jit/__init__.py", line 963, in init_then_register
    original_init(self, *args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/torch/jit/__init__.py", line 1316, in __init__
    self._name = orig.__name__
AttributeError: 'dict' object has no attribute '__name__'

Here is my code for the forward pass of the model:

def forward(self, input, hidden, return_hiddens=False, noise=False):
    # input  = torch.randn(1, 2)
    # hidden = (torch.randn(2, 1, 32), torch.randn(2, 1, 32))
    print('Input tuple length', len(input))
    print('Hidden tuple length', len(hidden))

    emb = self.drop(self.encoder(input.contiguous().view(-1, self.enc_input_size)))
    emb = emb.view(-1, input.size(1), self.rnn_hid_size)  # [seq_len * batch_size * feature_size]
    if noise:
        hidden = (F.dropout(hidden[0], training=True, p=0.9),
                  F.dropout(hidden[1], training=True, p=0.9))

    output, hidden = self.rnn(emb, hidden)
    output = self.drop(output)
    # [(seq_len * batch_size) * feature_size]
    decoded = self.decoder(output.view(output.size(0) * output.size(1), output.size(2)))
    decoded = decoded.view(output.size(0), output.size(1), decoded.size(1))  # [seq_len * batch_size * feature_size]
    if self.res_connection:
        decoded = decoded + input
    if return_hiddens:
        return decoded, hidden, output

    return decoded, hidden

Here is the code I used when calling torch.jit.trace:

model = torch.load('./gcloud-gpu-results/save/ecg/model_best/chfdb_chf13_45590.pth', map_location='cpu')
example1 = torch.randn(1, 2)
example2 = (torch.randn(2, 1, 32), torch.randn(2, 1, 32))
traced_script_module = torch.jit.trace(model, (example1, example2))
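For reference, here is a minimal self-contained sketch of the pattern I'm aiming for. It uses a toy LSTM module (not my actual model, so the sizes and layer choices are made up) just to show that torch.jit.trace accepts all of forward()'s positional inputs bundled into a single tuple, including a nested (h0, c0) tuple for the LSTM hidden state:

```python
import torch
import torch.nn as nn

# Toy stand-in for an encoder-decoder: the shapes below are illustrative only.
class ToyEncDec(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(2, 32, num_layers=2)  # [seq_len, batch, feature] input
        self.decoder = nn.Linear(32, 2)

    def forward(self, input, hidden):
        output, hidden = self.rnn(input, hidden)
        decoded = self.decoder(output)
        return decoded, hidden

model = ToyEncDec()
example1 = torch.randn(1, 1, 2)                             # [seq_len, batch, feature]
example2 = (torch.randn(2, 1, 32), torch.randn(2, 1, 32))   # (h0, c0)

# All positional inputs go in ONE tuple as the second argument;
# nested tuples of tensors (like the LSTM hidden state) are allowed.
traced = torch.jit.trace(model, (example1, example2))
```

Note that tracing requires an nn.Module (or a function), which is why the error below matters.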

Let me know if I should provide any more information or be clearer. Thank you!

Could you check what the type of the object you are torch.load()-ing is? It seems that the tracing path thinks it's a dict for some reason.
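A common cause of that exact AttributeError is that torch.save() was called on model.state_dict() (an OrderedDict) rather than on the module itself, so torch.load() hands back a dict, and torch.jit.trace needs an nn.Module. Here is a self-contained sketch of the check, with a toy nn.Linear standing in for your network:

```python
import os
import tempfile
import torch
import torch.nn as nn

net = nn.Linear(2, 2)                       # toy stand-in for the real model
path = os.path.join(tempfile.mkdtemp(), 'ckpt.pth')
torch.save(net.state_dict(), path)          # common save pattern: state_dict only

obj = torch.load(path, map_location='cpu')
print(type(obj))                            # an OrderedDict, not a Module

if isinstance(obj, dict):                   # state_dict: rebuild the module first
    model = nn.Linear(2, 2)                 # same constructor args as at training time
    model.load_state_dict(obj)
else:                                       # the whole module was saved
    model = obj
model.eval()

traced = torch.jit.trace(model, torch.randn(1, 2))
```

If type(obj) prints OrderedDict, you'll need to construct your model class with the original hyperparameters and call load_state_dict() on it before tracing.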