How to convert a model with no fixed input size to TorchScript?

If the size of the input image is not fixed, how do I convert the model to TorchScript?

You should use the @torch.jit.script annotation, which preserves the full semantics of your model (including any control flow that depends on the input size). See the TorchScript tutorial for more info.
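To illustrate the point, here is a minimal sketch (not from the thread itself): scripting, unlike tracing, keeps control flow that depends on the runtime input size, so no particular shape is baked into the exported graph.

```python
import torch

# Scripting compiles the function body, including this data-dependent branch.
# Tracing would instead record only one outcome for the example input.
@torch.jit.script
def pad_to_even(x: torch.Tensor) -> torch.Tensor:
    # Branch on the runtime height of x -- preserved by the script compiler.
    if x.size(0) % 2 != 0:
        x = torch.cat([x, torch.zeros([1, x.size(1)])], dim=0)
    return x
```

Calling it with a (3, 4) tensor returns a (4, 4) tensor, while a tensor with an even first dimension passes through unchanged, regardless of its size.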

Thanks. But I got an error.

My code is like this:

import torch
import torch.nn as nn

class BidirectionalLSTM(torch.jit.ScriptModule):
    # nIn: input size, nHidden: hidden units, nOut: output size
    def __init__(self, nIn, nHidden, nOut):
        super(BidirectionalLSTM, self).__init__()

        self.rnn = nn.LSTM(nIn, nHidden, bidirectional=True)
        self.embedding = nn.Linear(nHidden * 2, nOut)

    @torch.jit.script_method
    def forward(self, input):
        recurrent, _ = self.rnn(input)
        T, b, h = recurrent.size()
        t_rec = recurrent.view(T * b, h)

        output = self.embedding(t_rec)  # [T * b, nOut]
        output = output.view(T, b, -1)

        return output

xx = BidirectionalLSTM(256, 512, 512)

I got an error:

could not export python function call <python_value>. Remove calls to python functions before export.:
def forward(self, input):
    recurrent, _ = self.rnn(input)
                   ~~~~~~~~ <--- HERE

I added one line of code:

__constants__ = ['rnn']

and got this:

TypeError: 'LSTM' object for attribute 'rnn' is not a valid constant.
Valid constants are:
  1. a nn.ModuleList
  2. a value of type {bool, float, int, str, NoneType, function, device, layout, dtype}
  3. a list or tuple of (2)
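The error is expected: 'rnn' is a submodule, not a constant, so it does not belong in __constants__. A minimal sketch of a way around both errors, assuming a recent PyTorch (>= 1.2) with the recursive scripting API: define a plain nn.Module and pass the instance to torch.jit.script(), which compiles submodules such as nn.LSTM and nn.Linear automatically.

```python
import torch
import torch.nn as nn

class BidirectionalLSTM(nn.Module):
    def __init__(self, nIn, nHidden, nOut):
        super().__init__()
        self.rnn = nn.LSTM(nIn, nHidden, bidirectional=True)
        self.embedding = nn.Linear(nHidden * 2, nOut)

    def forward(self, input):
        recurrent, _ = self.rnn(input)
        # Sequence length T and batch size b are read at runtime,
        # so the scripted module accepts any input size.
        T, b, h = recurrent.size(0), recurrent.size(1), recurrent.size(2)
        t_rec = recurrent.view(T * b, h)
        output = self.embedding(t_rec)  # [T * b, nOut]
        return output.view(T, b, -1)

# Recursive scripting: no ScriptModule subclass, no __constants__ entry,
# no @torch.jit.script_method decorator needed.
scripted = torch.jit.script(BidirectionalLSTM(256, 512, 512))
```

The scripted module can then be saved with scripted.save() and handles variable sequence lengths and batch sizes at inference time.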