Problem converting a model via Annotation

I want to convert the model, but I am running into some problems:

RuntimeError: 
could not export python function call <python_value>. Remove calls to python functions before export.:
    mask : ``torch.ByteTensor`` , required.
        The mask for character-level input.
    """
    w_emb = self.word_embed(w_in)

    c_emb = self.char_embed(c_in)

    emb = self.drop( torch.cat([w_emb, c_emb], 2) )

    out = self.rnn(emb)
          ~~~~~~~~ <--- HERE

    mask = mask.unsqueeze(2).expand_as(out)

    out = out.masked_select(mask).view(-1, self.rnn_outdim)

    return out

Here is the code:

    def __init__(self, rnn,
                 w_num: int,
                 w_dim: int,
                 c_num: int,
                 c_dim: int,
                 y_dim: int,
                 y_num: int,
                 droprate: float):

        super(NER, self).__init__()

        self.rnn = rnn
        self.rnn_outdim = self.rnn.output_dim
        self.one_direction_dim = self.rnn_outdim // 2
        self.word_embed = nn.Embedding(w_num, w_dim)
        self.char_embed = nn.Embedding(c_num, c_dim)
        self.drop = nn.Dropout(p=droprate)
        self.add_proj = y_dim > 0
        self.to_chunk = highway(self.rnn_outdim)
        self.to_type = highway(self.rnn_outdim)

        if self.add_proj:
            self.to_chunk_proj = nn.Linear(self.rnn_outdim, y_dim)
            self.to_type_proj = nn.Linear(self.rnn_outdim, y_dim)
            self.chunk_weight = nn.Linear(y_dim, 1)
            self.type_weight = nn.Linear(y_dim, y_num)
            self.chunk_layer = nn.Sequential(self.to_chunk, self.drop, self.to_chunk_proj, self.drop, self.chunk_weight)
            self.type_layer = nn.Sequential(self.to_type, self.drop, self.to_type_proj, self.drop, self.type_weight)
        else:
            self.chunk_weight = nn.Linear(self.rnn_outdim, 1)
            self.type_weight = nn.Linear(self.rnn_outdim, y_num)
            self.chunk_layer = nn.Sequential(self.to_chunk, self.drop, self.chunk_weight)
            self.type_layer = nn.Sequential(self.to_type, self.drop, self.type_weight)

    @torch.jit.script_method
    def forward(self, w_in, c_in, mask):
        """
        Sequence labeling model.

        Parameters
        ----------
        w_in : ``torch.LongTensor``, required.
            The word-level input.
        c_in : ``torch.LongTensor``, required.
            The character-level input.
        mask : ``torch.ByteTensor`` , required.
            The mask for character-level input.
        """
        w_emb = self.word_embed(w_in)

        c_emb = self.char_embed(c_in)

        emb = self.drop( torch.cat([w_emb, c_emb], 2) )

        out = self.rnn(emb)

        mask = mask.unsqueeze(2).expand_as(out)

        out = out.masked_select(mask).view(-1, self.rnn_outdim)

        return out

The model is then constructed like this:

    rnn_map = {'Basic': BasicRNN}
    rnn_layer = rnn_map[args.rnn_layer](args.layer_num, args.rnn_unit, args.word_dim + args.char_dim, args.hid_dim, args.droprate, args.batch_norm)
    ner_model = NER(rnn_layer, len(w_map), args.word_dim, len(c_map), args.char_dim, args.label_dim, len(tl_map), args.droprate)
    ner_model.load_state_dict(model)
    ner_model.to(device)
    ner_model.eval()

Here is BasicUnit.__init__, where I trace the LSTM:

    def __init__(self, unit, input_dim, hid_dim, droprate, batch_norm):
        super(BasicUnit, self).__init__()

        self.unit_type = unit
        rnnunit_map = {'rnn': nn.RNN, 'lstm': nn.LSTM, 'gru': nn.GRU}
        # I trace the LSTM here
        self.layer = torch.jit.trace(
            nn.LSTM(input_dim, hid_dim // 2, 1, batch_first=True, bidirectional=True),
            torch.randn(500, 1, input_dim))
        self.droprate = droprate
        self.batch_norm = batch_norm
        if self.batch_norm:
            self.bn = nn.BatchNorm1d(hid_dim)
        self.output_dim = hid_dim

        self.init_hidden()

How can I convert it? My PyTorch version is 1.0.0.

Hi, the error is saying that if you want to export (serialize) your model to disk, you will need to convert all of your modules to TorchScript (either via tracing or scripting). Right now your model is only partially converted to TorchScript. For how to convert, here is our doc: https://pytorch.org/docs/stable/jit.html. I also recommend trying our new API if you are on our nightly builds.
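
Here is a minimal sketch of what I mean (the class names InnerRNN/Outer, the sizes, and the file name are made up for illustration, not taken from your code): every module that a @torch.jit.script_method calls has to itself be a ScriptModule or a traced module, so nothing in the call chain falls back to a plain Python forward.

    import torch
    import torch.nn as nn

    class InnerRNN(torch.jit.ScriptModule):
        # Stand-in for your BasicUnit: everything it calls is already TorchScript.
        def __init__(self, input_dim, hid_dim):
            super(InnerRNN, self).__init__()
            # The LSTM is traced once, so calling it from a script method is allowed.
            self.lstm = torch.jit.trace(
                nn.LSTM(input_dim, hid_dim // 2, 1, batch_first=True, bidirectional=True),
                torch.randn(4, 7, input_dim))

        @torch.jit.script_method
        def forward(self, x):
            out, _ = self.lstm(x)
            return out

    class Outer(torch.jit.ScriptModule):
        # Stand-in for your NER model: self.rnn is itself a ScriptModule.
        def __init__(self, rnn):
            super(Outer, self).__init__()
            self.rnn = rnn

        @torch.jit.script_method
        def forward(self, emb):
            # Compiles because self.rnn is a ScriptModule, so there is no
            # Python function call left for the exporter to complain about.
            return self.rnn(emb)

    model = Outer(InnerRNN(20, 16))
    model.save("model.pt")  # serializes without the "python function call" error

The same reasoning would apply to the modules you put inside nn.Sequential (including the custom highway module): anything reachable from a script method needs to be TorchScript rather than plain Python.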

Thank you for your reply. I am new to this; I have read the doc, but I think I have already done the conversion. Could you please tell me what's wrong with my conversion? Thank you again!

Another question:

RuntimeError: 
cannot call a value:
    Returns
    ----------
    output: ``torch.FloatTensor``.   
        The output of RNNs.
    """
    out, _ = self.layer(x)

    if self.batch_norm:
        output_size = out.size()
        out = self.bn(out.view(-1, self.output_dim)).view(output_size)
              ~~~~~~~ <--- HERE

    if self.droprate > 0:
        out = F.dropout(out, p=self.droprate, training=self.training)

    return out

The code is:

    @torch.jit.script_method
    def forward(self, x):
        """
        Calculate the output.

        Parameters
        ----------
        x : ``torch.LongTensor``, required.
            the input tensor, of shape (seq_len, batch_size, input_dim).

        Returns
        ----------
        output: ``torch.FloatTensor``.
            The output of RNNs.
        """
        out, _ = self.layer(x)

        if self.batch_norm:
            output_size = out.size()
            out = self.bn(out.view(-1, self.output_dim)).view(output_size)

        if self.droprate > 0:
            out = F.dropout(out, p=self.droprate, training=self.training)

        return out

The __init__ function is the BasicUnit.__init__ shown at the end of the first post.

One more question: I get this error message:

RuntimeError: 
could not export python function call <python_value>. Remove calls to python functions before export.:

        def forward(self, input):
            for m in self:
                input = m(input)
                        ~ <--- HERE
            return input

But I can’t find this forward function in my code. Why does this happen? Please help me!